Test Report: Docker_Linux_crio_arm64 19319

b956d22c0e4b666a5d5401b6edb64a8355930c4b:2024-07-23:35468

Failed tests (4/330)

|-------|-----------------------------------------------|--------------|
| Order | Failed test                                   | Duration (s) |
|-------|-----------------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress                   | 152.06       |
| 41    | TestAddons/parallel/MetricsServer             | 317.84       |
| 106   | TestFunctional/parallel/PersistentVolumeClaim | 201.2        |
| 274   | TestPause/serial/SecondStartNoReconfiguration | 26.84        |
|-------|-----------------------------------------------|--------------|
TestAddons/parallel/Ingress (152.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-140056 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-140056 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-140056 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ee131cce-0452-476b-b4dc-b7bbceebd9b3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ee131cce-0452-476b-b4dc-b7bbceebd9b3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003811324s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-140056 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.121469223s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
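Note: the "Process exited with status 28" in stderr above is curl's exit code 28 ("operation timed out"), propagated back through minikube ssh, which itself exits 1 as the Non-zero exit line shows; the ingress controller apparently never answered on port 80 within the roughly two-minute window. A minimal sketch of re-running the probe by hand against this profile (the profile name addons-140056 comes from this run; the --max-time cap is illustrative, not the harness's setting):

	# curl exits 28 on timeout; minikube ssh reports this as
	# "ssh: Process exited with status 28" and exits non-zero itself.
	out/minikube-linux-arm64 -p addons-140056 ssh \
	  "curl -s --max-time 130 -H 'Host: nginx.example.com' http://127.0.0.1/"
	echo "minikube exit: $?"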
addons_test.go:288: (dbg) Run:  kubectl --context addons-140056 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-140056 addons disable ingress-dns --alsologtostderr -v=1: (1.341511956s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-140056 addons disable ingress --alsologtostderr -v=1: (7.725448022s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-140056
helpers_test.go:235: (dbg) docker inspect addons-140056:
-- stdout --
	[
	    {
	        "Id": "b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004",
	        "Created": "2024-07-23T14:27:52.86795676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3324574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-23T14:27:53.006461012Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:71a7ac3dcc1f66f9b927c200bbaca5de093c77584a8e2cceb20f7c37b7028780",
	        "ResolvConfPath": "/var/lib/docker/containers/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004/hosts",
	        "LogPath": "/var/lib/docker/containers/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004-json.log",
	        "Name": "/addons-140056",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-140056:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-140056",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6a6bb241bc0a3e4465e1a43aee1f75c8fc97f694270fdd11e35d031c22d4e2f5-init/diff:/var/lib/docker/overlay2/cc3f8b49bb50b989dafe94ead705091dcc80edbdd409e161d5028bc93b57b742/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a6bb241bc0a3e4465e1a43aee1f75c8fc97f694270fdd11e35d031c22d4e2f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a6bb241bc0a3e4465e1a43aee1f75c8fc97f694270fdd11e35d031c22d4e2f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a6bb241bc0a3e4465e1a43aee1f75c8fc97f694270fdd11e35d031c22d4e2f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-140056",
	                "Source": "/var/lib/docker/volumes/addons-140056/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-140056",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-140056",
	                "name.minikube.sigs.k8s.io": "addons-140056",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c675d27c4aef7dfa8be6dc67cd724b8c6f2d1428cbca9863b30b6f781624761",
	            "SandboxKey": "/var/run/docker/netns/6c675d27c4ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37152"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37156"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37154"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37155"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-140056": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e79c422ee0f203263c66e8d1be99ca58a269a2578e91f8ab8004aa4f5b89e281",
	                    "EndpointID": "0b5c2016faff0288e1cfa9c7c76c429ffdf9591e99e7f78251d58736438d6377",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-140056",
	                        "b9b70e7c5302"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
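A note on the port mappings in the inspect output above: HostConfig.PortBindings pins each port to 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports, and the assigned values (37152-37156) appear only under NetworkSettings.Ports. This is why the harness resolves the node's SSH port with a Go template over NetworkSettings, as this same log does later during provisioning (see the provisionDockerMachine lines below), e.g.:

	# prints 37152 for this run; the port is ephemeral and differs per run
	docker container inspect addons-140056 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'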
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-140056 -n addons-140056
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-140056 logs -n 25: (1.448259369s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-547065                                                                     | download-only-547065   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| delete  | -p download-only-438325                                                                     | download-only-438325   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| delete  | -p download-only-292108                                                                     | download-only-292108   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| delete  | -p download-only-547065                                                                     | download-only-547065   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| start   | --download-only -p                                                                          | download-docker-248386 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | download-docker-248386                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-248386                                                                   | download-docker-248386 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-953180   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | binary-mirror-953180                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39823                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-953180                                                                     | binary-mirror-953180   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| addons  | enable dashboard -p                                                                         | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | addons-140056                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | addons-140056                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-140056 --wait=true                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | -p addons-140056                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-140056 ip                                                                            | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	| addons  | addons-140056 addons disable                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | -p addons-140056                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | addons-140056                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-140056 ssh cat                                                                       | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | /opt/local-path-provisioner/pvc-4719a5dc-20ce-42e3-9843-cd46009709ea_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-140056 addons disable                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-140056 addons                                                                        | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:32 UTC | 23 Jul 24 14:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-140056 addons                                                                        | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:32 UTC | 23 Jul 24 14:32 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:32 UTC | 23 Jul 24 14:32 UTC |
	|         | addons-140056                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-140056 ssh curl -s                                                                   | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-140056 ip                                                                            | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:34 UTC | 23 Jul 24 14:34 UTC |
	| addons  | addons-140056 addons disable                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:34 UTC | 23 Jul 24 14:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-140056 addons disable                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:34 UTC | 23 Jul 24 14:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:27:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:27:28.676695 3324089 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:27:28.676884 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:28.676913 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:27:28.676933 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:28.677188 3324089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:27:28.677639 3324089 out.go:298] Setting JSON to false
	I0723 14:27:28.678557 3324089 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":83395,"bootTime":1721661454,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 14:27:28.678658 3324089 start.go:139] virtualization:  
	I0723 14:27:28.681154 3324089 out.go:177] * [addons-140056] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0723 14:27:28.683405 3324089 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:27:28.683575 3324089 notify.go:220] Checking for updates...
	I0723 14:27:28.687447 3324089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:27:28.689253 3324089 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:27:28.690933 3324089 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 14:27:28.692548 3324089 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 14:27:28.694491 3324089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:27:28.696530 3324089 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:27:28.717550 3324089 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 14:27:28.717682 3324089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:28.779563 3324089 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-23 14:27:28.76973313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:28.779685 3324089 docker.go:307] overlay module found
	I0723 14:27:28.781695 3324089 out.go:177] * Using the docker driver based on user configuration
	I0723 14:27:28.783669 3324089 start.go:297] selected driver: docker
	I0723 14:27:28.783686 3324089 start.go:901] validating driver "docker" against <nil>
	I0723 14:27:28.783700 3324089 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:27:28.784352 3324089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:28.838679 3324089 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-23 14:27:28.830269269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:28.838848 3324089 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 14:27:28.839084 3324089 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:27:28.841437 3324089 out.go:177] * Using Docker driver with root privileges
	I0723 14:27:28.843754 3324089 cni.go:84] Creating CNI manager for ""
	I0723 14:27:28.843775 3324089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:27:28.843787 3324089 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 14:27:28.843870 3324089 start.go:340] cluster config:
	{Name:addons-140056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:27:28.846142 3324089 out.go:177] * Starting "addons-140056" primary control-plane node in "addons-140056" cluster
	I0723 14:27:28.847849 3324089 cache.go:121] Beginning downloading kic base image for docker with crio
	I0723 14:27:28.849949 3324089 out.go:177] * Pulling base image v0.0.44-1721687125-19319 ...
	I0723 14:27:28.852051 3324089 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:27:28.852074 3324089 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local docker daemon
	I0723 14:27:28.852096 3324089 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0723 14:27:28.852104 3324089 cache.go:56] Caching tarball of preloaded images
	I0723 14:27:28.852193 3324089 preload.go:172] Found /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0723 14:27:28.852205 3324089 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:27:28.852570 3324089 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/config.json ...
	I0723 14:27:28.852602 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/config.json: {Name:mk80729d63297d5bf8076b3f30a05eb0be283ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:28.866706 3324089 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
	I0723 14:27:28.866828 3324089 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory
	I0723 14:27:28.866847 3324089 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory, skipping pull
	I0723 14:27:28.866853 3324089 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae exists in cache, skipping pull
	I0723 14:27:28.866859 3324089 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae as a tarball
	I0723 14:27:28.866864 3324089 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from local cache
	I0723 14:27:45.818570 3324089 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from cached tarball
	I0723 14:27:45.818608 3324089 cache.go:194] Successfully downloaded all kic artifacts
	I0723 14:27:45.818653 3324089 start.go:360] acquireMachinesLock for addons-140056: {Name:mk87e835be44b124ffc36d4dd9b3cf7b09db44cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:27:45.819355 3324089 start.go:364] duration metric: took 675.553µs to acquireMachinesLock for "addons-140056"
	I0723 14:27:45.819391 3324089 start.go:93] Provisioning new machine with config: &{Name:addons-140056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:27:45.819481 3324089 start.go:125] createHost starting for "" (driver="docker")
	I0723 14:27:45.821588 3324089 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0723 14:27:45.821841 3324089 start.go:159] libmachine.API.Create for "addons-140056" (driver="docker")
	I0723 14:27:45.821879 3324089 client.go:168] LocalClient.Create starting
	I0723 14:27:45.821998 3324089 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem
	I0723 14:27:46.093836 3324089 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem
	I0723 14:27:46.464651 3324089 cli_runner.go:164] Run: docker network inspect addons-140056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0723 14:27:46.480067 3324089 cli_runner.go:211] docker network inspect addons-140056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0723 14:27:46.480159 3324089 network_create.go:284] running [docker network inspect addons-140056] to gather additional debugging logs...
	I0723 14:27:46.480179 3324089 cli_runner.go:164] Run: docker network inspect addons-140056
	W0723 14:27:46.495772 3324089 cli_runner.go:211] docker network inspect addons-140056 returned with exit code 1
	I0723 14:27:46.495804 3324089 network_create.go:287] error running [docker network inspect addons-140056]: docker network inspect addons-140056: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-140056 not found
	I0723 14:27:46.495817 3324089 network_create.go:289] output of [docker network inspect addons-140056]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-140056 not found
	
	** /stderr **
	I0723 14:27:46.495927 3324089 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0723 14:27:46.519578 3324089 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400178c6c0}
	I0723 14:27:46.519624 3324089 network_create.go:124] attempt to create docker network addons-140056 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0723 14:27:46.519686 3324089 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-140056 addons-140056
	I0723 14:27:46.585952 3324089 network_create.go:108] docker network addons-140056 192.168.49.0/24 created
	I0723 14:27:46.585988 3324089 kic.go:121] calculated static IP "192.168.49.2" for the "addons-140056" container
	I0723 14:27:46.586062 3324089 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0723 14:27:46.601629 3324089 cli_runner.go:164] Run: docker volume create addons-140056 --label name.minikube.sigs.k8s.io=addons-140056 --label created_by.minikube.sigs.k8s.io=true
	I0723 14:27:46.619041 3324089 oci.go:103] Successfully created a docker volume addons-140056
	I0723 14:27:46.619143 3324089 cli_runner.go:164] Run: docker run --rm --name addons-140056-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-140056 --entrypoint /usr/bin/test -v addons-140056:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -d /var/lib
	I0723 14:27:48.625028 3324089 cli_runner.go:217] Completed: docker run --rm --name addons-140056-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-140056 --entrypoint /usr/bin/test -v addons-140056:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -d /var/lib: (2.005837296s)
	I0723 14:27:48.625062 3324089 oci.go:107] Successfully prepared a docker volume addons-140056
	I0723 14:27:48.625087 3324089 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:27:48.625106 3324089 kic.go:194] Starting extracting preloaded images to volume ...
	I0723 14:27:48.625207 3324089 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-140056:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -I lz4 -xf /preloaded.tar -C /extractDir
	I0723 14:27:52.803707 3324089 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-140056:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.178443041s)
	I0723 14:27:52.803742 3324089 kic.go:203] duration metric: took 4.178632163s to extract preloaded images to volume ...
	W0723 14:27:52.803880 3324089 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0723 14:27:52.804000 3324089 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0723 14:27:52.853957 3324089 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-140056 --name addons-140056 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-140056 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-140056 --network addons-140056 --ip 192.168.49.2 --volume addons-140056:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae
	I0723 14:27:53.197662 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Running}}
	I0723 14:27:53.222518 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:27:53.246584 3324089 cli_runner.go:164] Run: docker exec addons-140056 stat /var/lib/dpkg/alternatives/iptables
	I0723 14:27:53.317873 3324089 oci.go:144] the created container "addons-140056" has a running status.
	I0723 14:27:53.317900 3324089 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa...
	I0723 14:27:53.750712 3324089 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0723 14:27:53.771604 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:27:53.791817 3324089 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0723 14:27:53.791836 3324089 kic_runner.go:114] Args: [docker exec --privileged addons-140056 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0723 14:27:53.865928 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:27:53.890330 3324089 machine.go:94] provisionDockerMachine start ...
	I0723 14:27:53.890421 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:53.914772 3324089 main.go:141] libmachine: Using SSH client type: native
	I0723 14:27:53.915031 3324089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37152 <nil> <nil>}
	I0723 14:27:53.915039 3324089 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 14:27:54.078473 3324089 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-140056
	
	I0723 14:27:54.078495 3324089 ubuntu.go:169] provisioning hostname "addons-140056"
	I0723 14:27:54.078587 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:54.098059 3324089 main.go:141] libmachine: Using SSH client type: native
	I0723 14:27:54.098297 3324089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37152 <nil> <nil>}
	I0723 14:27:54.098309 3324089 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-140056 && echo "addons-140056" | sudo tee /etc/hostname
	I0723 14:27:54.245063 3324089 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-140056
	
	I0723 14:27:54.245156 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:54.268308 3324089 main.go:141] libmachine: Using SSH client type: native
	I0723 14:27:54.268555 3324089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37152 <nil> <nil>}
	I0723 14:27:54.268576 3324089 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-140056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-140056/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-140056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:27:54.406555 3324089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 14:27:54.406644 3324089 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19319-3317687/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-3317687/.minikube}
	I0723 14:27:54.406702 3324089 ubuntu.go:177] setting up certificates
	I0723 14:27:54.406734 3324089 provision.go:84] configureAuth start
	I0723 14:27:54.406846 3324089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-140056
	I0723 14:27:54.422125 3324089 provision.go:143] copyHostCerts
	I0723 14:27:54.422200 3324089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.pem (1082 bytes)
	I0723 14:27:54.422322 3324089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/cert.pem (1123 bytes)
	I0723 14:27:54.422388 3324089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/key.pem (1679 bytes)
	I0723 14:27:54.422442 3324089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem org=jenkins.addons-140056 san=[127.0.0.1 192.168.49.2 addons-140056 localhost minikube]
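
The server certificate generated here is what the machine's Docker/CRI endpoint presents; the SAN list in the line above (127.0.0.1, 192.168.49.2, addons-140056, localhost, minikube) is what makes it valid both for the container's IP and for its hostnames. A minimal Go sketch of producing a certificate with those SANs (illustrative only, and self-signed for brevity; the real flow signs with the minikube CA it manages on disk):

    // Illustrative sketch only (not minikube's certs.go): build a server
    // certificate carrying the SANs listed in the log line above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-140056"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            // SANs exactly as reported by provision.go above.
            DNSNames:    []string{"addons-140056", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity; the real flow signs with the minikube CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
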
	I0723 14:27:54.769995 3324089 provision.go:177] copyRemoteCerts
	I0723 14:27:54.770070 3324089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:27:54.770113 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:54.786286 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:54.875157 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0723 14:27:54.899396 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 14:27:54.922996 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 14:27:54.946782 3324089 provision.go:87] duration metric: took 540.018184ms to configureAuth
	I0723 14:27:54.946812 3324089 ubuntu.go:193] setting minikube options for container-runtime
	I0723 14:27:54.946997 3324089 config.go:182] Loaded profile config "addons-140056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:27:54.947116 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:54.962770 3324089 main.go:141] libmachine: Using SSH client type: native
	I0723 14:27:54.963002 3324089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37152 <nil> <nil>}
	I0723 14:27:54.963023 3324089 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:27:55.190953 3324089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:27:55.191039 3324089 machine.go:97] duration metric: took 1.300690298s to provisionDockerMachine
	I0723 14:27:55.191064 3324089 client.go:171] duration metric: took 9.369174116s to LocalClient.Create
	I0723 14:27:55.191116 3324089 start.go:167] duration metric: took 9.369276057s to libmachine.API.Create "addons-140056"
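
The /etc/sysconfig/crio.minikube drop-in written a few lines above carries a single variable, CRIO_MINIKUBE_OPTIONS, which marks the service CIDR 10.96.0.0/12 as an insecure registry so in-cluster registries can be pulled from without TLS. A sketch of rendering that file (illustrative; the actual provisioner pipes the content through `sudo tee` over SSH and then restarts crio):

    // Illustrative sketch: render the CRIO_MINIKUBE_OPTIONS drop-in.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Value taken from the log: treat the service CIDR as an insecure
        // registry so in-cluster registries work without TLS.
        content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s '\n",
            "--insecure-registry 10.96.0.0/12")
        // Written locally here; the real flow is
        // "... | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio".
        if err := os.WriteFile("crio.minikube", []byte(content), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
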
	I0723 14:27:55.191142 3324089 start.go:293] postStartSetup for "addons-140056" (driver="docker")
	I0723 14:27:55.191169 3324089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:27:55.191274 3324089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:27:55.191367 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:55.207728 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:55.299546 3324089 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:27:55.302604 3324089 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0723 14:27:55.302643 3324089 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0723 14:27:55.302654 3324089 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0723 14:27:55.302661 3324089 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0723 14:27:55.302672 3324089 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/addons for local assets ...
	I0723 14:27:55.302746 3324089 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/files for local assets ...
	I0723 14:27:55.302773 3324089 start.go:296] duration metric: took 111.60936ms for postStartSetup
	I0723 14:27:55.303102 3324089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-140056
	I0723 14:27:55.318688 3324089 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/config.json ...
	I0723 14:27:55.318990 3324089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:27:55.319042 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:55.335150 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:55.419277 3324089 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0723 14:27:55.423838 3324089 start.go:128] duration metric: took 9.604340903s to createHost
	I0723 14:27:55.423861 3324089 start.go:83] releasing machines lock for "addons-140056", held for 9.604491675s
	I0723 14:27:55.423934 3324089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-140056
	I0723 14:27:55.439581 3324089 ssh_runner.go:195] Run: cat /version.json
	I0723 14:27:55.439654 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:55.439945 3324089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:27:55.440013 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:55.456700 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:55.464098 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:55.546266 3324089 ssh_runner.go:195] Run: systemctl --version
	I0723 14:27:55.675664 3324089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:27:55.816283 3324089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0723 14:27:55.820584 3324089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:27:55.840953 3324089 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0723 14:27:55.841038 3324089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:27:55.870687 3324089 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
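
The two find/mv runs above quarantine any preinstalled loopback, bridge, or podman CNI configs by renaming them with a .mk_disabled suffix, so the only CNI the runtime sees is the one minikube applies later (kindnet, per the "recommending kindnet" lines further down). An equivalent sketch in Go (illustrative; paths and suffix taken from the log):

    // Illustrative sketch of the CNI-config quarantine performed by the
    // two find commands above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil {
            panic(err)
        }
        for _, f := range matches {
            base := filepath.Base(f)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already quarantined
            }
            // loopback, bridge, and podman configs are all sidelined.
            if strings.Contains(base, "loopback.conf") ||
                strings.Contains(base, "bridge") ||
                strings.Contains(base, "podman") {
                if err := os.Rename(f, f+".mk_disabled"); err == nil {
                    fmt.Println("disabled", f)
                }
            }
        }
    }
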
	I0723 14:27:55.870712 3324089 start.go:495] detecting cgroup driver to use...
	I0723 14:27:55.870745 3324089 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0723 14:27:55.870803 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:27:55.886934 3324089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:27:55.899267 3324089 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:27:55.899376 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:27:55.912780 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:27:55.927522 3324089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:27:56.009262 3324089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:27:56.098982 3324089 docker.go:233] disabling docker service ...
	I0723 14:27:56.099058 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:27:56.119778 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:27:56.131896 3324089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:27:56.223929 3324089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:27:56.323672 3324089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:27:56.335627 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:27:56.351927 3324089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:27:56.351998 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.361751 3324089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:27:56.361889 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.372082 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.381803 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.391549 3324089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:27:56.400775 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.410496 3324089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.425884 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.435819 3324089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:27:56.444410 3324089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:27:56.452954 3324089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:27:56.531977 3324089 ssh_runner.go:195] Run: sudo systemctl restart crio
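
The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O matches what kubeadm expects: pause image registry.k8s.io/pause:3.9, cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 added to default_sysctls, followed by a daemon-reload and crio restart. A small Go sketch of the same line-oriented rewrite (illustrative; the "before" values in the sample input are assumptions, only the replacement values come from the log):

    // Illustrative sketch of the in-place config rewrite done by the sed
    // commands above.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.8\"\n" + // assumed old value
            "cgroup_manager = \"systemd\"\n" + // assumed old value
            "conmon_cgroup = \"system.slice\"\n" // assumed old value

        replace := func(pattern, repl string) {
            conf = regexp.MustCompile(pattern).ReplaceAllString(conf, repl)
        }
        // Replacement values taken from the log.
        replace(`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`)
        replace(`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
        replace(`(?m)^conmon_cgroup = .*$`, `conmon_cgroup = "pod"`)
        fmt.Print(conf)
    }
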
	I0723 14:27:56.644516 3324089 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:27:56.644650 3324089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:27:56.648877 3324089 start.go:563] Will wait 60s for crictl version
	I0723 14:27:56.648962 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:27:56.652551 3324089 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:27:56.696220 3324089 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0723 14:27:56.696331 3324089 ssh_runner.go:195] Run: crio --version
	I0723 14:27:56.738494 3324089 ssh_runner.go:195] Run: crio --version
	I0723 14:27:56.780501 3324089 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0723 14:27:56.782339 3324089 cli_runner.go:164] Run: docker network inspect addons-140056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0723 14:27:56.798501 3324089 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0723 14:27:56.801908 3324089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
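
The bash one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh one, and copy the result back into place. The same logic as a Go sketch (illustrative; it writes to a scratch file instead of /etc/hosts):

    // Illustrative sketch of the idempotent hosts-file update above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const name = "host.minikube.internal"
        const entry = "192.168.49.1\t" + name // IP and name from the log
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing entry for the name so the update is idempotent.
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        // The real flow writes to /tmp and then "sudo cp"s over /etc/hosts;
        // a scratch file keeps this sketch harmless.
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile("hosts.new", []byte(out), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
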
	I0723 14:27:56.812808 3324089 kubeadm.go:883] updating cluster {Name:addons-140056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 14:27:56.812936 3324089 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:27:56.812995 3324089 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:27:56.891655 3324089 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:27:56.891683 3324089 crio.go:433] Images already preloaded, skipping extraction
	I0723 14:27:56.891747 3324089 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:27:56.931318 3324089 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:27:56.931338 3324089 cache_images.go:84] Images are preloaded, skipping loading
	I0723 14:27:56.931346 3324089 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0723 14:27:56.931462 3324089 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-140056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 14:27:56.931559 3324089 ssh_runner.go:195] Run: crio config
	I0723 14:27:56.979758 3324089 cni.go:84] Creating CNI manager for ""
	I0723 14:27:56.979786 3324089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:27:56.979802 3324089 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 14:27:56.979827 3324089 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-140056 NodeName:addons-140056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 14:27:56.979976 3324089 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-140056"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 14:27:56.980050 3324089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:27:56.989213 3324089 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 14:27:56.989297 3324089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 14:27:56.998154 3324089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0723 14:27:57.017400 3324089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:27:57.036331 3324089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0723 14:27:57.055318 3324089 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0723 14:27:57.058990 3324089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:27:57.070178 3324089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:27:57.153158 3324089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:27:57.167266 3324089 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056 for IP: 192.168.49.2
	I0723 14:27:57.167288 3324089 certs.go:194] generating shared ca certs ...
	I0723 14:27:57.167304 3324089 certs.go:226] acquiring lock for ca certs: {Name:mk9061483da1430ff0fd8e32bc77025286e53111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.168259 3324089 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key
	I0723 14:27:57.481023 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt ...
	I0723 14:27:57.481097 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt: {Name:mkac5e6ee201c918e9f6812b3f036372d7b91909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.481333 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key ...
	I0723 14:27:57.481368 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key: {Name:mk5044d99e3911a26057aa19d541ef688454b0bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.481508 3324089 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key
	I0723 14:27:57.898965 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt ...
	I0723 14:27:57.899001 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt: {Name:mk52321785175ef4f7dd53b6748c34de00ade795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.899224 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key ...
	I0723 14:27:57.899239 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key: {Name:mkf4f18e7a143e31fd6ffcc2466f4c28bfc32125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.899325 3324089 certs.go:256] generating profile certs ...
	I0723 14:27:57.899382 3324089 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.key
	I0723 14:27:57.899401 3324089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt with IP's: []
	I0723 14:27:58.253169 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt ...
	I0723 14:27:58.253202 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: {Name:mk3cb3b40a01ee6617f9deb0f299f2b0ed1c6ffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.254041 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.key ...
	I0723 14:27:58.254057 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.key: {Name:mk65719a9b4f855f11034b945737dea15d736bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.254152 3324089 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key.a9e22fa1
	I0723 14:27:58.254176 3324089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt.a9e22fa1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0723 14:27:58.395859 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt.a9e22fa1 ...
	I0723 14:27:58.395891 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt.a9e22fa1: {Name:mk77e7ff46e1d6669d88968c50a47abce2a5fb2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.396070 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key.a9e22fa1 ...
	I0723 14:27:58.396083 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key.a9e22fa1: {Name:mkfd1c7c65c3fb36f2123a3171695c1c8765d629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.396166 3324089 certs.go:381] copying /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt.a9e22fa1 -> /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt
	I0723 14:27:58.396248 3324089 certs.go:385] copying /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key.a9e22fa1 -> /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key
	I0723 14:27:58.396306 3324089 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.key
	I0723 14:27:58.396323 3324089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.crt with IP's: []
	I0723 14:27:58.660745 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.crt ...
	I0723 14:27:58.660778 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.crt: {Name:mk8b5193b638b3cd0f127ce6a2cfa785ff40ec62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.661544 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.key ...
	I0723 14:27:58.661562 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.key: {Name:mk19e0c6dc1a71614c2ec1d64282d70726deeb4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.661762 3324089 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:27:58.661808 3324089 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem (1082 bytes)
	I0723 14:27:58.661834 3324089 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:27:58.661861 3324089 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem (1679 bytes)
	I0723 14:27:58.662461 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:27:58.687376 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0723 14:27:58.711841 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:27:58.735943 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0723 14:27:58.760301 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0723 14:27:58.784325 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 14:27:58.808116 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:27:58.832466 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 14:27:58.861271 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:27:58.891173 3324089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 14:27:58.909318 3324089 ssh_runner.go:195] Run: openssl version
	I0723 14:27:58.914927 3324089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:27:58.924118 3324089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:27:58.927711 3324089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 14:27 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:27:58.927775 3324089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:27:58.934309 3324089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 14:27:58.943727 3324089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:27:58.946959 3324089 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 14:27:58.947006 3324089 kubeadm.go:392] StartCluster: {Name:addons-140056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:27:58.947099 3324089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 14:27:58.947162 3324089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 14:27:58.995350 3324089 cri.go:89] found id: ""
	I0723 14:27:58.995428 3324089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 14:27:59.008656 3324089 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 14:27:59.017679 3324089 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0723 14:27:59.017776 3324089 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 14:27:59.026863 3324089 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 14:27:59.026887 3324089 kubeadm.go:157] found existing configuration files:
	
	I0723 14:27:59.026941 3324089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 14:27:59.036137 3324089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 14:27:59.036253 3324089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 14:27:59.045064 3324089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 14:27:59.053749 3324089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 14:27:59.053833 3324089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 14:27:59.062387 3324089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 14:27:59.070863 3324089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 14:27:59.070933 3324089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 14:27:59.079291 3324089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 14:27:59.088081 3324089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 14:27:59.088151 3324089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
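
The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm regenerates it (here the files are simply missing, since this is a first start). A compact sketch of that per-file check (illustrative):

    // Illustrative sketch of the stale-kubeconfig check and cleanup above.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err == nil && bytes.Contains(data, []byte(endpoint)) {
                continue // config already points at the right endpoint
            }
            // Missing or stale: remove so kubeadm regenerates it.
            _ = os.Remove(f)
            fmt.Println("cleared", f)
        }
    }
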
	I0723 14:27:59.097802 3324089 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0723 14:27:59.141713 3324089 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 14:27:59.141914 3324089 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 14:27:59.189397 3324089 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0723 14:27:59.189469 3324089 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
	I0723 14:27:59.189511 3324089 kubeadm.go:310] OS: Linux
	I0723 14:27:59.189561 3324089 kubeadm.go:310] CGROUPS_CPU: enabled
	I0723 14:27:59.189611 3324089 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0723 14:27:59.189663 3324089 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0723 14:27:59.189712 3324089 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0723 14:27:59.189761 3324089 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0723 14:27:59.189815 3324089 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0723 14:27:59.189861 3324089 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0723 14:27:59.189910 3324089 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0723 14:27:59.189959 3324089 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0723 14:27:59.259922 3324089 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 14:27:59.260097 3324089 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 14:27:59.260228 3324089 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 14:27:59.493849 3324089 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 14:27:59.497773 3324089 out.go:204]   - Generating certificates and keys ...
	I0723 14:27:59.497865 3324089 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 14:27:59.497933 3324089 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 14:28:00.574154 3324089 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0723 14:28:00.746368 3324089 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0723 14:28:00.962272 3324089 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0723 14:28:01.467846 3324089 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0723 14:28:01.787231 3324089 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0723 14:28:01.787688 3324089 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-140056 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0723 14:28:02.873810 3324089 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0723 14:28:02.874161 3324089 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-140056 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0723 14:28:03.397669 3324089 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0723 14:28:03.631357 3324089 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0723 14:28:03.960433 3324089 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0723 14:28:03.960732 3324089 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 14:28:05.151975 3324089 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 14:28:05.397886 3324089 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 14:28:06.555296 3324089 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 14:28:07.058194 3324089 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 14:28:07.338075 3324089 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 14:28:07.338724 3324089 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 14:28:07.341663 3324089 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 14:28:07.343973 3324089 out.go:204]   - Booting up control plane ...
	I0723 14:28:07.344076 3324089 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 14:28:07.344156 3324089 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 14:28:07.345097 3324089 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 14:28:07.356266 3324089 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 14:28:07.357308 3324089 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 14:28:07.357541 3324089 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 14:28:07.451910 3324089 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 14:28:07.452000 3324089 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 14:28:08.953519 3324089 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501676588s
	I0723 14:28:08.953613 3324089 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 14:28:14.455140 3324089 kubeadm.go:310] [api-check] The API server is healthy after 5.501597014s
	I0723 14:28:14.475262 3324089 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 14:28:14.488857 3324089 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 14:28:14.510149 3324089 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 14:28:14.510349 3324089 kubeadm.go:310] [mark-control-plane] Marking the node addons-140056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 14:28:14.521093 3324089 kubeadm.go:310] [bootstrap-token] Using token: msqdak.i15bkzpxmuc2bwv4
	I0723 14:28:14.523052 3324089 out.go:204]   - Configuring RBAC rules ...
	I0723 14:28:14.523187 3324089 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 14:28:14.528276 3324089 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 14:28:14.536865 3324089 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 14:28:14.543238 3324089 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 14:28:14.547350 3324089 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 14:28:14.552380 3324089 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 14:28:14.863945 3324089 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 14:28:15.307264 3324089 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 14:28:15.862819 3324089 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 14:28:15.863992 3324089 kubeadm.go:310] 
	I0723 14:28:15.864071 3324089 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 14:28:15.864088 3324089 kubeadm.go:310] 
	I0723 14:28:15.864164 3324089 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 14:28:15.864172 3324089 kubeadm.go:310] 
	I0723 14:28:15.864197 3324089 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 14:28:15.864258 3324089 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 14:28:15.864310 3324089 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 14:28:15.864319 3324089 kubeadm.go:310] 
	I0723 14:28:15.864370 3324089 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 14:28:15.864377 3324089 kubeadm.go:310] 
	I0723 14:28:15.864423 3324089 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 14:28:15.864431 3324089 kubeadm.go:310] 
	I0723 14:28:15.864481 3324089 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 14:28:15.864558 3324089 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 14:28:15.864627 3324089 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 14:28:15.864634 3324089 kubeadm.go:310] 
	I0723 14:28:15.864716 3324089 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 14:28:15.864794 3324089 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 14:28:15.864802 3324089 kubeadm.go:310] 
	I0723 14:28:15.864883 3324089 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token msqdak.i15bkzpxmuc2bwv4 \
	I0723 14:28:15.864985 3324089 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d2fc8c293f7a91921409feabe0671bea5964c21341b2c1e458fbfaf2884181ca \
	I0723 14:28:15.865009 3324089 kubeadm.go:310] 	--control-plane 
	I0723 14:28:15.865014 3324089 kubeadm.go:310] 
	I0723 14:28:15.865096 3324089 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 14:28:15.865104 3324089 kubeadm.go:310] 
	I0723 14:28:15.865183 3324089 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token msqdak.i15bkzpxmuc2bwv4 \
	I0723 14:28:15.865284 3324089 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d2fc8c293f7a91921409feabe0671bea5964c21341b2c1e458fbfaf2884181ca 
	I0723 14:28:15.868894 3324089 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1065-aws\n", err: exit status 1
	I0723 14:28:15.869025 3324089 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0723 14:28:15.869043 3324089 cni.go:84] Creating CNI manager for ""
	I0723 14:28:15.869050 3324089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:28:15.871439 3324089 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0723 14:28:15.873346 3324089 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0723 14:28:15.877424 3324089 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0723 14:28:15.877442 3324089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0723 14:28:15.895673 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0723 14:28:16.194145 3324089 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 14:28:16.194286 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:16.194377 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-140056 minikube.k8s.io/updated_at=2024_07_23T14_28_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=addons-140056 minikube.k8s.io/primary=true
	I0723 14:28:16.363680 3324089 ops.go:34] apiserver oom_adj: -16
	I0723 14:28:16.363771 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:16.863918 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:17.363973 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:17.864242 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:18.363933 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:18.864687 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:19.364399 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:19.863936 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:20.364712 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:20.863934 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:21.363990 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:21.863981 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:22.364258 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:22.863924 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:23.364531 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:23.864638 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:24.364146 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:24.864490 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:25.363949 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:25.864196 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:26.364622 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:26.864505 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:27.364793 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:27.864747 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:28.363877 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:28.864461 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:28.989454 3324089 kubeadm.go:1113] duration metric: took 12.795219002s to wait for elevateKubeSystemPrivileges
	I0723 14:28:28.989481 3324089 kubeadm.go:394] duration metric: took 30.042478293s to StartCluster
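
[annotation] The twenty-five near-identical `kubectl get sa default` runs above are minikube polling on a roughly 500ms cadence until the "default" ServiceAccount exists — the signal behind the `elevateKubeSystemPrivileges` duration metric that the API server is up and serving. A minimal sketch of that loop, assuming a plain `kubectl` on PATH instead of minikube's ssh_runner (function name is illustrative, not minikube's):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount re-runs `kubectl get sa default` every
	// 500ms until it succeeds or ctx expires, mirroring the cadence above.
	func waitForDefaultServiceAccount(ctx context.Context, kubeconfig string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil // the default ServiceAccount exists
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("default ServiceAccount never appeared: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		fmt.Println(waitForDefaultServiceAccount(ctx, "/var/lib/minikube/kubeconfig"))
	}
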
	I0723 14:28:28.989498 3324089 settings.go:142] acquiring lock: {Name:mkc6849065e362533c3a341cb8f31c09fc3ebad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:28:28.990197 3324089 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:28:28.990645 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/kubeconfig: {Name:mk3abebf3fbbb55a1b61d2bc2eb17945b9b8d937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:28:28.990829 3324089 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:28:28.990909 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0723 14:28:28.991153 3324089 config.go:182] Loaded profile config "addons-140056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:28:28.991187 3324089 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
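
[annotation] From here the timestamps interleave and occasionally run out of order (e.g. 28.996792 before 28.995471), because each addon in the `toEnable` map is brought up by its own goroutine. A minimal fan-out sketch of that pattern, assuming a hypothetical `enableAddon` stand-in for the per-addon scp-and-apply work:

	package main

	import (
		"fmt"
		"sync"
	)

	// enableAddon is a hypothetical stand-in for minikube's per-addon
	// setup: copy manifests into the node, then kubectl apply them.
	func enableAddon(profile, name string) error {
		fmt.Printf("Setting addon %s=true in %q\n", name, profile)
		return nil
	}

	func main() {
		addons := []string{"yakd", "metrics-server", "nvidia-device-plugin",
			"cloud-spanner", "csi-hostpath-driver", "ingress", "ingress-dns"}
		var wg sync.WaitGroup
		for _, a := range addons {
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				if err := enableAddon("addons-140056", name); err != nil {
					fmt.Println("!", name, err)
				}
			}(a)
		}
		wg.Wait()
	}
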
	I0723 14:28:28.991269 3324089 addons.go:69] Setting yakd=true in profile "addons-140056"
	I0723 14:28:28.991294 3324089 addons.go:234] Setting addon yakd=true in "addons-140056"
	I0723 14:28:28.991318 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:28.991756 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.992376 3324089 addons.go:69] Setting metrics-server=true in profile "addons-140056"
	I0723 14:28:28.992398 3324089 addons.go:234] Setting addon metrics-server=true in "addons-140056"
	I0723 14:28:28.992423 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:28.992822 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.993981 3324089 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-140056"
	I0723 14:28:28.995670 3324089 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-140056"
	I0723 14:28:28.995922 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:28.996792 3324089 out.go:177] * Verifying Kubernetes components...
	I0723 14:28:28.995471 3324089 addons.go:69] Setting cloud-spanner=true in profile "addons-140056"
	I0723 14:28:28.997453 3324089 addons.go:234] Setting addon cloud-spanner=true in "addons-140056"
	I0723 14:28:28.997512 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:28.997881 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995480 3324089 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-140056"
	I0723 14:28:29.003254 3324089 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-140056"
	I0723 14:28:29.003333 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.003883 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.004615 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.005228 3324089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:28:28.995493 3324089 addons.go:69] Setting default-storageclass=true in profile "addons-140056"
	I0723 14:28:29.010628 3324089 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-140056"
	I0723 14:28:29.011003 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995499 3324089 addons.go:69] Setting gcp-auth=true in profile "addons-140056"
	I0723 14:28:29.018870 3324089 mustload.go:65] Loading cluster: addons-140056
	I0723 14:28:29.019063 3324089 config.go:182] Loaded profile config "addons-140056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:28:29.019318 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995509 3324089 addons.go:69] Setting ingress=true in profile "addons-140056"
	I0723 14:28:29.020470 3324089 addons.go:234] Setting addon ingress=true in "addons-140056"
	I0723 14:28:29.020514 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.020907 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995516 3324089 addons.go:69] Setting ingress-dns=true in profile "addons-140056"
	I0723 14:28:29.030654 3324089 addons.go:234] Setting addon ingress-dns=true in "addons-140056"
	I0723 14:28:29.030711 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.031141 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995621 3324089 addons.go:69] Setting inspektor-gadget=true in profile "addons-140056"
	I0723 14:28:29.067255 3324089 addons.go:234] Setting addon inspektor-gadget=true in "addons-140056"
	I0723 14:28:29.067371 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.069707 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995820 3324089 addons.go:69] Setting volcano=true in profile "addons-140056"
	I0723 14:28:29.082299 3324089 addons.go:234] Setting addon volcano=true in "addons-140056"
	I0723 14:28:29.082344 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.083589 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995830 3324089 addons.go:69] Setting registry=true in profile "addons-140056"
	I0723 14:28:29.102198 3324089 addons.go:234] Setting addon registry=true in "addons-140056"
	I0723 14:28:29.102235 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.102762 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.104541 3324089 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0723 14:28:29.112022 3324089 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 14:28:29.112105 3324089 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 14:28:29.112207 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
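
[annotation] The `docker container inspect -f` calls sprinkled through this phase all evaluate the same Go template: index the container's published ports by "22/tcp" and take the first binding's HostPort — that is how each upcoming SSH client learns its port (37152 in the sshutil lines below). A hedged sketch of the same lookup, shelling out to the docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port Docker mapped to the container's
	// SSH port (22/tcp), using the exact template seen in the log above.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-140056")
		fmt.Println(port, err) // e.g. 37152
	}
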
	I0723 14:28:28.995837 3324089 addons.go:69] Setting storage-provisioner=true in profile "addons-140056"
	I0723 14:28:29.128635 3324089 addons.go:234] Setting addon storage-provisioner=true in "addons-140056"
	I0723 14:28:29.128679 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.129116 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995843 3324089 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-140056"
	I0723 14:28:29.138152 3324089 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-140056"
	I0723 14:28:29.138472 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995868 3324089 addons.go:69] Setting volumesnapshots=true in profile "addons-140056"
	I0723 14:28:29.138610 3324089 addons.go:234] Setting addon volumesnapshots=true in "addons-140056"
	I0723 14:28:29.138644 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.138999 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.139387 3324089 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0723 14:28:29.162926 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0723 14:28:29.162948 3324089 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0723 14:28:29.163014 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.194749 3324089 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0723 14:28:29.203097 3324089 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0723 14:28:29.203177 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0723 14:28:29.203272 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.212431 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0723 14:28:29.215493 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0723 14:28:29.220821 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0723 14:28:29.227322 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0723 14:28:29.240237 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0723 14:28:29.250016 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0723 14:28:29.250293 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.261846 3324089 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0723 14:28:29.267377 3324089 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0723 14:28:29.267453 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0723 14:28:29.267568 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.304017 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0723 14:28:29.308240 3324089 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0723 14:28:29.309955 3324089 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0723 14:28:29.309976 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0723 14:28:29.310039 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.318977 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0723 14:28:29.324221 3324089 addons.go:234] Setting addon default-storageclass=true in "addons-140056"
	I0723 14:28:29.324263 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.324668 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.332349 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0723 14:28:29.332372 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0723 14:28:29.332443 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.335996 3324089 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0723 14:28:29.343955 3324089 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 14:28:29.347078 3324089 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 14:28:29.348122 3324089 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	W0723 14:28:29.347350 3324089 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0723 14:28:29.349451 3324089 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 14:28:29.349469 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 14:28:29.349535 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.355130 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0723 14:28:29.355154 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0723 14:28:29.355259 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.379556 3324089 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-140056"
	I0723 14:28:29.379602 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.379713 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
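
[annotation] Each `sshutil.go:53` line above and below constructs an SSH client to the forwarded docker port as user "docker", authenticating with the machine's RSA key. A sketch of an equivalent client using golang.org/x/crypto/ssh, with the address, user, and key path copied from the log (the host-key check is skipped, which is reasonable only for a throwaway local test node):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test node only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:37152", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected:", string(client.ServerVersion()))
	}
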
	I0723 14:28:29.379988 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.381682 3324089 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 14:28:29.386048 3324089 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0723 14:28:29.386068 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0723 14:28:29.386133 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.405683 3324089 out.go:177]   - Using image docker.io/registry:2.8.3
	I0723 14:28:29.410571 3324089 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0723 14:28:29.412537 3324089 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0723 14:28:29.412559 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0723 14:28:29.412625 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.423015 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.425565 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0723 14:28:29.430241 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0723 14:28:29.430264 3324089 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0723 14:28:29.430331 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.459051 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.520715 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0723 14:28:29.520869 3324089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:28:29.531704 3324089 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0723 14:28:29.532100 3324089 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 14:28:29.532162 3324089 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 14:28:29.532228 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.545809 3324089 out.go:177]   - Using image docker.io/busybox:stable
	I0723 14:28:29.548342 3324089 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0723 14:28:29.548405 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0723 14:28:29.548506 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.583962 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.584517 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.586330 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.592116 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.592866 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.666961 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.676082 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.676501 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.694746 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.695223 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.761473 3324089 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 14:28:29.761492 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0723 14:28:29.811343 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0723 14:28:29.811364 3324089 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0723 14:28:29.919009 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0723 14:28:29.937213 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0723 14:28:29.937274 3324089 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0723 14:28:29.972391 3324089 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 14:28:29.972412 3324089 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 14:28:29.987814 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0723 14:28:30.031105 3324089 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0723 14:28:30.031197 3324089 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0723 14:28:30.074597 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0723 14:28:30.074676 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0723 14:28:30.082860 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0723 14:28:30.082943 3324089 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0723 14:28:30.128176 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 14:28:30.134137 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0723 14:28:30.134214 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0723 14:28:30.162448 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0723 14:28:30.226257 3324089 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 14:28:30.226328 3324089 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 14:28:30.236728 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0723 14:28:30.242414 3324089 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0723 14:28:30.242492 3324089 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0723 14:28:30.262736 3324089 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0723 14:28:30.262808 3324089 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0723 14:28:30.266336 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0723 14:28:30.266404 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0723 14:28:30.300768 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 14:28:30.308266 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0723 14:28:30.312353 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0723 14:28:30.312424 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0723 14:28:30.317292 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0723 14:28:30.317368 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0723 14:28:30.411842 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 14:28:30.426090 3324089 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0723 14:28:30.426358 3324089 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0723 14:28:30.479575 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0723 14:28:30.479646 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0723 14:28:30.493134 3324089 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0723 14:28:30.493202 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0723 14:28:30.500594 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0723 14:28:30.515897 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0723 14:28:30.515975 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0723 14:28:30.624533 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0723 14:28:30.624555 3324089 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0723 14:28:30.678004 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0723 14:28:30.678025 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0723 14:28:30.701028 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0723 14:28:30.701054 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0723 14:28:30.708150 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0723 14:28:30.813079 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0723 14:28:30.813153 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0723 14:28:30.814763 3324089 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 14:28:30.814823 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0723 14:28:30.887186 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0723 14:28:30.887259 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0723 14:28:30.951499 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0723 14:28:30.951571 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0723 14:28:30.952708 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 14:28:30.995086 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0723 14:28:30.995167 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0723 14:28:31.014808 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0723 14:28:31.014881 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0723 14:28:31.087363 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0723 14:28:31.087446 3324089 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0723 14:28:31.114206 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0723 14:28:31.179409 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0723 14:28:31.179482 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0723 14:28:31.356677 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0723 14:28:31.356753 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0723 14:28:31.474028 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0723 14:28:31.474103 3324089 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0723 14:28:31.581307 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0723 14:28:32.043188 3324089 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.522438345s)
	I0723 14:28:32.043264 3324089 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0723 14:28:32.043430 3324089 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.522546441s)
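
[annotation] The 2.5s command that just completed is the CoreDNS rewrite started earlier: the sed pipeline fetches the coredns ConfigMap, inserts a `hosts` block ahead of the `forward . /etc/resolv.conf` directive (and `log` ahead of `errors`), then replaces the ConfigMap. Per the sed script itself, the injected stanza resolves host.minikube.internal to the container gateway:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
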
	I0723 14:28:32.044889 3324089 node_ready.go:35] waiting up to 6m0s for node "addons-140056" to be "Ready" ...
	I0723 14:28:33.747896 3324089 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-140056" context rescaled to 1 replicas
	I0723 14:28:34.224159 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:34.295904 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.376820362s)
	I0723 14:28:34.295956 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.308124387s)
	I0723 14:28:34.770317 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.642063063s)
	I0723 14:28:34.770390 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.607873289s)
	I0723 14:28:35.654129 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.417320083s)
	I0723 14:28:35.654165 3324089 addons.go:475] Verifying addon ingress=true in "addons-140056"
	I0723 14:28:35.654322 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.353479018s)
	I0723 14:28:35.654614 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.346284987s)
	I0723 14:28:35.654711 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.242791419s)
	I0723 14:28:35.654729 3324089 addons.go:475] Verifying addon metrics-server=true in "addons-140056"
	I0723 14:28:35.654785 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.154117857s)
	I0723 14:28:35.654810 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.946590605s)
	I0723 14:28:35.654822 3324089 addons.go:475] Verifying addon registry=true in "addons-140056"
	I0723 14:28:35.657137 3324089 out.go:177] * Verifying registry addon...
	I0723 14:28:35.657137 3324089 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-140056 service yakd-dashboard -n yakd-dashboard
	
	I0723 14:28:35.657266 3324089 out.go:177] * Verifying ingress addon...
	I0723 14:28:35.660108 3324089 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0723 14:28:35.660993 3324089 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0723 14:28:35.686218 3324089 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0723 14:28:35.686310 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:35.689052 3324089 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0723 14:28:35.689113 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
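
[annotation] The `kapi.go:96` lines that dominate the rest of this log are label-selector polls: list the pods matching the selector and report their phase until all are Running. A hedged client-go sketch of the same wait, under the assumption of a local kubeconfig (function name is illustrative):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until all are Running.
	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false // still Pending, as in the lines above
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute))
	}
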
	W0723 14:28:35.728846 3324089 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
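
[annotation] The 'default-storageclass' warning above is a textbook optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and write. The usual client-go remedy (not necessarily what minikube does) is to re-read and retry the update on conflict; a sketch using k8s.io/client-go/util/retry:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation, re-reading the
	// object and retrying whenever the write hits a Conflict error.
	func markNonDefault(cs *kubernetes.Clientset, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err // retried automatically if this is a Conflict
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		fmt.Println(markNonDefault(kubernetes.NewForConfigOrDie(cfg), "local-path"))
	}
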
	I0723 14:28:35.859834 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.907055838s)
	W0723 14:28:35.859892 3324089 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0723 14:28:35.859919 3324089 retry.go:31] will retry after 340.50808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
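
[annotation] This failure is the classic CRD-establishment race: the VolumeSnapshot CRDs and a VolumeSnapshotClass CR are applied in one batch, and the CR is rejected because the freshly created CRDs are not yet served by discovery. minikube simply retries after 340ms with `kubectl apply --force` (see the rerun below), which succeeds once the CRDs are established. A sketch of the more explicit fix — wait for the CRD's Established condition before applying CRs:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// crdEstablished reports whether the named CRD has condition
	// Established=True, via kubectl's JSONPath output.
	func crdEstablished(name string) bool {
		out, err := exec.Command("kubectl", "get", "crd", name, "-o",
			`jsonpath={.status.conditions[?(@.type=="Established")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		crd := "volumesnapshotclasses.snapshot.storage.k8s.io"
		for i := 0; i < 20 && !crdEstablished(crd); i++ {
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("established:", crdEstablished(crd))
	}
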
	I0723 14:28:35.859992 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.745711082s)
	I0723 14:28:36.167602 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:36.184086 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:36.201607 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 14:28:36.219068 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.637652027s)
	I0723 14:28:36.219233 3324089 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-140056"
	I0723 14:28:36.221653 3324089 out.go:177] * Verifying csi-hostpath-driver addon...
	I0723 14:28:36.224649 3324089 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0723 14:28:36.258610 3324089 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0723 14:28:36.258719 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:36.574332 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:36.666666 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:36.679392 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:36.741439 3324089 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0723 14:28:36.741516 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:37.168024 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:37.168872 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:37.229332 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:37.665254 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:37.666306 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:37.733599 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:38.165388 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:38.166467 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:38.229916 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:38.520430 3324089 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0723 14:28:38.520534 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:38.540642 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:38.669141 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:38.670083 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:38.673296 3324089 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0723 14:28:38.701159 3324089 addons.go:234] Setting addon gcp-auth=true in "addons-140056"
	I0723 14:28:38.701268 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:38.701753 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:38.732720 3324089 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0723 14:28:38.732780 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:38.753798 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:38.761351 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:39.048849 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:39.165944 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:39.169123 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:39.232268 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:39.351737 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.150032093s)
	I0723 14:28:39.354864 3324089 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 14:28:39.357624 3324089 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0723 14:28:39.360110 3324089 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0723 14:28:39.360145 3324089 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0723 14:28:39.386963 3324089 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0723 14:28:39.386989 3324089 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0723 14:28:39.409847 3324089 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0723 14:28:39.409868 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0723 14:28:39.429228 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0723 14:28:39.665679 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:39.668855 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:39.755233 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:40.095932 3324089 addons.go:475] Verifying addon gcp-auth=true in "addons-140056"
	I0723 14:28:40.098894 3324089 out.go:177] * Verifying gcp-auth addon...
	I0723 14:28:40.103060 3324089 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0723 14:28:40.110068 3324089 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0723 14:28:40.110096 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:40.165912 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:40.166968 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:40.229478 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:40.606653 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:40.665170 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:40.665819 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:40.732165 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:41.106987 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:41.165718 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:41.166114 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:41.228755 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:41.549171 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:41.606517 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:41.664302 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:41.666610 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:41.745737 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:42.107460 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:42.166379 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:42.167708 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:42.234425 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:42.607561 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:42.665139 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:42.665818 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:42.730361 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:43.107239 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:43.164223 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:43.165945 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:43.229118 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:43.607319 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:43.665202 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:43.665873 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:43.732421 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:44.048964 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:44.107343 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:44.164385 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:44.165397 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:44.229551 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:44.607086 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:44.665078 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:44.665989 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:44.730358 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:45.107591 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:45.166278 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:45.167575 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:45.231331 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:45.606724 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:45.664995 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:45.665787 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:45.732895 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:46.106773 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:46.165171 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:46.165912 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:46.229399 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:46.547780 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:46.607350 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:46.665389 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:46.666303 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:46.731525 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:47.106209 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:47.163895 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:47.164884 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:47.229095 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:47.606775 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:47.665380 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:47.665692 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:47.731917 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:48.107364 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:48.165392 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:48.165913 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:48.231232 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:48.548818 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:48.606371 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:48.663730 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:48.664858 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:48.732509 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:49.106165 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:49.164058 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:49.164818 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:49.229117 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:49.607330 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:49.665401 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:49.666284 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:49.729638 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:50.106807 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:50.165362 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:50.165819 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:50.229533 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:50.607636 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:50.664117 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:50.665605 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:50.732603 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:51.048839 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:51.106900 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:51.165072 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:51.165775 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:51.229311 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:51.606451 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:51.664997 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:51.665815 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:51.731796 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:52.106808 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:52.164077 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:52.165278 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:52.229558 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:52.607030 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:52.664333 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:52.666139 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:52.733276 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:53.048995 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:53.106794 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:53.165868 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:53.166197 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:53.229360 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:53.607214 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:53.665993 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:53.666208 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:53.731640 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:54.107271 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:54.165328 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:54.166364 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:54.229079 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:54.606705 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:54.664672 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:54.666352 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:54.730467 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:55.106965 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:55.164906 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:55.166502 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:55.229372 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:55.548023 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:55.606864 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:55.665603 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:55.666618 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:55.731098 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:56.107162 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:56.165652 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:56.166376 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:56.229557 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:56.606713 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:56.665178 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:56.666155 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:56.734171 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:57.107078 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:57.165179 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:57.165612 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:57.229793 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:57.548711 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:57.606958 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:57.664637 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:57.665765 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:57.729742 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:58.106906 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:58.165461 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:58.165525 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:58.229658 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:58.609691 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:58.669572 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:58.675975 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:58.732754 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:59.106766 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:59.164275 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:59.165759 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:59.230804 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:59.548907 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:59.606666 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:59.664947 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:59.667200 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:59.731305 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:00.111249 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:00.171082 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:00.172588 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:00.229939 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:00.606675 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:00.664706 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:00.665078 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:00.731104 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:01.106469 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:01.164107 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:01.165999 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:01.229151 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:01.607043 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:01.665707 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:01.666017 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:01.729987 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:02.049504 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:02.106841 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:02.165353 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:02.165892 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:02.228534 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:02.606739 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:02.664844 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:02.665248 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:02.732110 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:03.107204 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:03.165784 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:03.166488 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:03.229182 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:03.607044 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:03.665282 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:03.666260 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:03.731968 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:04.106472 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:04.164198 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:04.165513 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:04.229530 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:04.548715 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:04.606846 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:04.665114 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:04.665553 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:04.730698 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:05.107656 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:05.164517 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:05.165706 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:05.229175 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:05.607378 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:05.664714 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:05.666792 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:05.731569 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:06.107025 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:06.163700 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:06.165357 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:06.229224 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:06.606919 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:06.665290 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:06.665848 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:06.732829 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:07.048225 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:07.107686 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:07.165538 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:07.165988 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:07.229364 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:07.606902 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:07.665906 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:07.666169 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:07.731979 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:08.106294 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:08.164356 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:08.165766 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:08.229657 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:08.606386 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:08.665426 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:08.665867 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:08.732433 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:09.048948 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:09.110145 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:09.165886 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:09.167365 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:09.229640 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:09.607007 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:09.665195 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:09.665769 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:09.731530 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:10.106883 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:10.165353 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:10.165655 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:10.229690 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:10.606443 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:10.664438 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:10.664971 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:10.732580 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:11.107481 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:11.164761 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:11.166934 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:11.230185 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:11.547871 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:11.606939 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:11.665401 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:11.666307 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:11.731554 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:12.106659 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:12.164695 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:12.165566 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:12.229326 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:12.607003 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:12.663802 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:12.664988 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:12.731844 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:13.106263 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:13.163893 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:13.164969 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:13.229471 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:13.548107 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:13.607459 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:13.664842 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:13.665631 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:13.732525 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:14.106483 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:14.165066 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:14.166177 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:14.229209 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:14.606713 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:14.664189 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:14.665923 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:14.732446 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:15.107250 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:15.164492 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:15.166663 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:15.276712 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:15.568390 3324089 node_ready.go:49] node "addons-140056" has status "Ready":"True"
	I0723 14:29:15.568416 3324089 node_ready.go:38] duration metric: took 43.523493377s for node "addons-140056" to be "Ready" ...
	I0723 14:29:15.568427 3324089 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:29:15.596495 3324089 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jgz96" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:15.639962 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:15.685723 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:15.688582 3324089 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0723 14:29:15.688607 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:15.739288 3324089 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0723 14:29:15.739317 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:16.106484 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:16.165137 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:16.167380 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:16.234712 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:16.609531 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:16.670334 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:16.671612 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:16.733293 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:17.115223 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:17.182748 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:17.184104 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:17.239656 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:17.604379 3324089 pod_ready.go:102] pod "coredns-7db6d8ff4d-jgz96" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:17.608174 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:17.666255 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:17.671544 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:17.741769 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:18.106676 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:18.166239 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:18.168200 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:18.230859 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:18.603441 3324089 pod_ready.go:92] pod "coredns-7db6d8ff4d-jgz96" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.603597 3324089 pod_ready.go:81] duration metric: took 3.007065828s for pod "coredns-7db6d8ff4d-jgz96" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.603639 3324089 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.609209 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:18.618987 3324089 pod_ready.go:92] pod "etcd-addons-140056" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.619056 3324089 pod_ready.go:81] duration metric: took 15.397479ms for pod "etcd-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.619085 3324089 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.628850 3324089 pod_ready.go:92] pod "kube-apiserver-addons-140056" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.628920 3324089 pod_ready.go:81] duration metric: took 9.814929ms for pod "kube-apiserver-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.628946 3324089 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.642868 3324089 pod_ready.go:92] pod "kube-controller-manager-addons-140056" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.642940 3324089 pod_ready.go:81] duration metric: took 13.974094ms for pod "kube-controller-manager-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.642968 3324089 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qch7m" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.650639 3324089 pod_ready.go:92] pod "kube-proxy-qch7m" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.650709 3324089 pod_ready.go:81] duration metric: took 7.720102ms for pod "kube-proxy-qch7m" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.650735 3324089 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.673107 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:18.677199 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:18.753580 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:19.001465 3324089 pod_ready.go:92] pod "kube-scheduler-addons-140056" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:19.001550 3324089 pod_ready.go:81] duration metric: took 350.794651ms for pod "kube-scheduler-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:19.001577 3324089 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:19.106899 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:19.168194 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:19.169043 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:19.230341 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:19.607103 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:19.667553 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:19.668905 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:19.734397 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:20.107964 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:20.167246 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:20.173380 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:20.230865 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:20.606888 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:20.673106 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:20.674225 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:20.734910 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:21.010344 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:21.107271 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:21.166094 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:21.167482 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:21.231411 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:21.608086 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:21.666637 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:21.670300 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:21.760672 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:22.109071 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:22.167258 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:22.168751 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:22.229917 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:22.607410 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:22.668695 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:22.670013 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:22.734485 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:23.107395 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:23.166553 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:23.166912 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:23.230597 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:23.508845 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:23.607434 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:23.665824 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:23.667082 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:23.747057 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:24.106463 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:24.167976 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:24.169441 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:24.231779 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:24.606863 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:24.667796 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:24.669272 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:24.731076 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:25.107050 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:25.168171 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:25.170494 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:25.231943 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:25.607263 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:25.668994 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:25.670447 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:25.733126 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:26.015291 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:26.108018 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:26.170056 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:26.170921 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:26.231483 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:26.607415 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:26.676839 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:26.677419 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:26.742321 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:27.107537 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:27.170984 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:27.171614 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:27.230400 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:27.607121 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:27.666109 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:27.667633 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:27.742457 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:28.107000 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:28.167033 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:28.168705 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:28.231011 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:28.507681 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:28.606707 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:28.665455 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:28.667175 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:28.732610 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:29.106975 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:29.164996 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:29.166866 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:29.230740 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:29.606885 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:29.665599 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:29.666718 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:29.735206 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:30.109589 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:30.168940 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:30.171452 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:30.230213 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:30.508928 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:30.606324 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:30.676509 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:30.678028 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:30.735171 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:31.107557 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:31.166562 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:31.167345 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:31.230364 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:31.607188 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:31.665834 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:31.666981 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:31.733177 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:32.106953 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:32.165879 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:32.166788 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:32.234432 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:32.607205 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:32.665447 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:32.667187 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:32.740633 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:33.011139 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:33.107046 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:33.168298 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:33.168563 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:33.233264 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:33.607013 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:33.665614 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:33.666728 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:33.730882 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:34.107224 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:34.166632 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:34.167629 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:34.229743 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:34.608947 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:34.670117 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:34.673831 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:34.734053 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:35.011776 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:35.107644 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:35.190948 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:35.193302 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:35.239003 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:35.607973 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:35.667072 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:35.668454 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:35.733804 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:36.107000 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:36.165478 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:36.167053 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:36.231339 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:36.607041 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:36.674222 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:36.677447 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:36.754255 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:37.108029 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:37.169593 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:37.173669 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:37.230040 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:37.507541 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:37.606743 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:37.668033 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:37.669174 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:37.734656 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:38.107129 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:38.165147 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:38.167286 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:38.231078 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:38.608032 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:38.666714 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:38.669568 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:38.730487 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:39.107423 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:39.169271 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:39.170778 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:39.230856 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:39.508695 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:39.606912 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:39.668053 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:39.669523 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:39.746238 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:40.107049 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:40.166942 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:40.167697 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:40.230518 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:40.608450 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:40.665527 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:40.667361 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:40.733400 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:41.108394 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:41.166738 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:41.168708 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:41.230510 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:41.614228 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:41.668501 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:41.669255 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:41.733332 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:42.015525 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:42.107124 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:42.166839 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:42.168535 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:42.234672 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:42.644861 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:42.688306 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:42.688738 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:42.744457 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:43.106816 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:43.165836 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:43.170317 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:43.231743 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:43.606684 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:43.671241 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:43.671890 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:43.735551 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:44.106809 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:44.166747 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:44.170728 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:44.230025 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:44.507868 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:44.629047 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:44.666809 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:44.668521 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:44.748015 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:45.107967 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:45.167564 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:45.170486 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:45.232867 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:45.607328 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:45.667026 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:45.668752 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:45.732927 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:46.106430 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:46.166958 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:46.167900 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:46.229770 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:46.508063 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:46.606658 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:46.669774 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:46.670662 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:46.733301 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:47.107309 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:47.165334 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:47.166130 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:47.231095 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:47.607251 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:47.666718 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:47.669818 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:47.734257 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:48.107973 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:48.173468 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:48.175934 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:48.232526 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:48.514823 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:48.607639 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:48.666009 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:48.668447 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:48.765942 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:49.107121 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:49.168165 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:49.170788 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:49.231425 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:49.606983 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:49.673241 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:49.674443 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:49.745181 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:50.107351 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:50.167252 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:50.171882 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:50.231058 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:50.613019 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:50.679642 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:50.680861 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:50.741092 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:51.012605 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:51.107616 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:51.170622 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:51.181087 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:51.231367 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:51.607144 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:51.668862 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:51.669727 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:51.730592 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:52.106771 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:52.167686 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:52.168832 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:52.231020 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:52.608334 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:52.668901 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:52.672755 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:52.737160 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:53.020360 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:53.107234 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:53.165310 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:53.169939 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:53.230912 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:53.607282 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:53.667645 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:53.668708 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:53.734626 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:54.107148 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:54.165260 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:54.167214 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:54.231896 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:54.606773 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:54.687550 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:54.688689 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:54.734811 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:55.107293 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:55.166313 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:55.168305 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:55.244174 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:55.508348 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:55.607772 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:55.666749 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:55.668376 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:55.747411 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:56.107236 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:56.166373 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:56.168253 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:56.231743 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:56.607397 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:56.670168 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:56.672374 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:56.755838 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:57.108245 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:57.165971 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:57.168416 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:57.231111 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:57.549140 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:29:57.607775 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:57.670699 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:57.676504 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:57.738186 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:58.106950 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:58.166667 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:58.167138 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:58.231233 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:58.607966 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:58.667501 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:58.668166 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:58.733932 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:59.110567 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:59.167183 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:59.167848 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:59.230740 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:59.606812 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:59.666135 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:59.666865 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:59.734014 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:00.029621 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:00.147640 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:00.171909 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:00.176716 3324089 kapi.go:107] duration metric: took 1m24.516605404s to wait for kubernetes.io/minikube-addons=registry ...
	I0723 14:30:00.244083 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:00.606779 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:00.665632 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:00.738144 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:01.107861 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:01.167567 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:01.245319 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:01.608114 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:01.667960 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:01.742016 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:02.107911 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:02.167450 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:02.235281 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:02.508864 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:02.608596 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:02.666929 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:02.748870 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:03.107410 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:03.166130 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:03.230627 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:03.606989 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:03.666816 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:03.737855 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:04.107122 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:04.166504 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:04.231366 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:04.611102 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:04.665580 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:04.733732 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:05.011186 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:05.106735 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:05.166109 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:05.231303 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:05.606793 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:05.666188 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:05.732969 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:06.107362 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:06.169635 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:06.231469 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:06.607260 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:06.666208 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:06.735411 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:07.011455 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:07.106455 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:07.165890 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:07.231077 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:07.607618 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:07.666103 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:07.737041 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:08.106785 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:08.165833 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:08.230569 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:08.606919 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:08.666590 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:08.731441 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:09.018089 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:09.107391 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:09.166923 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:09.230648 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:09.608322 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:09.676226 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:09.746255 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:10.108490 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:10.166001 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:10.230646 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:10.607057 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:10.665982 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:10.731885 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:11.108913 3324089 kapi.go:107] duration metric: took 1m31.005853741s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0723 14:30:11.111047 3324089 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-140056 cluster.
	I0723 14:30:11.112742 3324089 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0723 14:30:11.114776 3324089 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0723 14:30:11.165225 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:11.230820 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:11.507989 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:11.665801 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:11.732818 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:12.166482 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:12.235645 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:12.665789 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:12.753940 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:13.166576 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:13.231254 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:13.508197 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:13.666293 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:13.765088 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:14.166747 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:14.231102 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:14.667120 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:14.738637 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:15.168771 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:15.237172 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:15.509196 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:15.665800 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:15.741918 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:16.165792 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:16.230965 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:16.678379 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:16.733988 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:17.165979 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:17.230164 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:17.509523 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:17.666647 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:17.736984 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:18.165810 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:18.229971 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:18.665542 3324089 kapi.go:107] duration metric: took 1m43.004543556s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0723 14:30:18.741009 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:19.231193 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:19.512344 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:19.733452 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:20.232648 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:20.737023 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:21.230271 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:21.733281 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:22.009667 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:22.230962 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:22.733529 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:23.234261 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:23.733482 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:24.011688 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:24.231487 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:24.735094 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:25.232531 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:25.735478 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:26.011909 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:26.231159 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:26.734492 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:27.231412 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:27.732602 3324089 kapi.go:107] duration metric: took 1m51.507947444s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0723 14:30:27.734916 3324089 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0723 14:30:27.736557 3324089 addons.go:510] duration metric: took 1m58.745359938s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0723 14:30:28.013377 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:30.029288 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:32.508108 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:35.011351 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:36.010078 3324089 pod_ready.go:92] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"True"
	I0723 14:30:36.010110 3324089 pod_ready.go:81] duration metric: took 1m17.008512698s for pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace to be "Ready" ...
	I0723 14:30:36.010124 3324089 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rhfcp" in "kube-system" namespace to be "Ready" ...
	I0723 14:30:36.016710 3324089 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-rhfcp" in "kube-system" namespace has status "Ready":"True"
	I0723 14:30:36.016739 3324089 pod_ready.go:81] duration metric: took 6.604634ms for pod "nvidia-device-plugin-daemonset-rhfcp" in "kube-system" namespace to be "Ready" ...
	I0723 14:30:36.016762 3324089 pod_ready.go:38] duration metric: took 1m20.448322356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:30:36.018002 3324089 api_server.go:52] waiting for apiserver process to appear ...
	I0723 14:30:36.019758 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 14:30:36.019850 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 14:30:36.072301 3324089 cri.go:89] found id: "a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:36.072324 3324089 cri.go:89] found id: ""
	I0723 14:30:36.072332 3324089 logs.go:276] 1 containers: [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91]
	I0723 14:30:36.072801 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.077756 3324089 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 14:30:36.077837 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 14:30:36.122182 3324089 cri.go:89] found id: "137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:36.122206 3324089 cri.go:89] found id: ""
	I0723 14:30:36.122214 3324089 logs.go:276] 1 containers: [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9]
	I0723 14:30:36.122278 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.126074 3324089 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 14:30:36.126192 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 14:30:36.173196 3324089 cri.go:89] found id: "0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:36.173217 3324089 cri.go:89] found id: ""
	I0723 14:30:36.173229 3324089 logs.go:276] 1 containers: [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c]
	I0723 14:30:36.173292 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.176835 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 14:30:36.176908 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 14:30:36.216340 3324089 cri.go:89] found id: "54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:36.216361 3324089 cri.go:89] found id: ""
	I0723 14:30:36.216369 3324089 logs.go:276] 1 containers: [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d]
	I0723 14:30:36.216451 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.220470 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 14:30:36.220550 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 14:30:36.271330 3324089 cri.go:89] found id: "82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:36.271401 3324089 cri.go:89] found id: ""
	I0723 14:30:36.271422 3324089 logs.go:276] 1 containers: [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437]
	I0723 14:30:36.271511 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.275008 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 14:30:36.275115 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 14:30:36.315432 3324089 cri.go:89] found id: "a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:36.315455 3324089 cri.go:89] found id: ""
	I0723 14:30:36.315465 3324089 logs.go:276] 1 containers: [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967]
	I0723 14:30:36.315525 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.319009 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 14:30:36.319082 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 14:30:36.360882 3324089 cri.go:89] found id: "bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:36.360903 3324089 cri.go:89] found id: ""
	I0723 14:30:36.360911 3324089 logs.go:276] 1 containers: [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f]
	I0723 14:30:36.360967 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.365330 3324089 logs.go:123] Gathering logs for etcd [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9] ...
	I0723 14:30:36.365358 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:36.437161 3324089 logs.go:123] Gathering logs for kube-scheduler [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d] ...
	I0723 14:30:36.437198 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:36.500359 3324089 logs.go:123] Gathering logs for kube-controller-manager [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967] ...
	I0723 14:30:36.500394 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:36.571986 3324089 logs.go:123] Gathering logs for kindnet [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f] ...
	I0723 14:30:36.572023 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
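The sweep above follows one fixed recipe per component: resolve the container ID by name, then dump its last 400 lines. A minimal bash sketch of that loop, run on the node (e.g. via minikube ssh) — component names are copied from the --name filters above; the /tmp output paths are an assumption for illustration:

	# Mirror minikube's discovery + dump pair for each component:
	# `crictl ps -a --quiet --name=<c>` then `crictl logs --tail 400 <id>`.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	  id=$(sudo crictl ps -a --quiet --name="$c" | head -n1)   # newest match only
	  [ -n "$id" ] && sudo crictl logs --tail 400 "$id" > "/tmp/$c.log" 2>&1
	done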
	I0723 14:30:36.626364 3324089 logs.go:123] Gathering logs for kubelet ...
	I0723 14:30:36.626400 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0723 14:30:36.661816 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.261241    1548 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.662127 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.261372    1548 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.664485 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271322    1548 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.664698 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271363    1548 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.664879 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271626    1548 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.665081 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271661    1548 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.665270 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271627    1548 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.665462 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271686    1548 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667218 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302024    1548 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667442 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302075    1548 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667615 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302140    1548 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667807 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302153    1548 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667993 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302194    1548 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668198 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302209    1548 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668376 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302254    1548 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668575 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302266    1548 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668763 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302313    1548 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668967 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.669153 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.669366 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.669687 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.669872 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
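Every entry flagged above is the same transient node-authorizer denial: the kubelet lists a ConfigMap or Secret before its pod-to-node relationship graph is populated, so the read is forbidden. A sketch that pulls the same warnings straight from the node's journal (the journalctl form matches the invocation above; the grep pattern is an assumption):

	sudo journalctl -u kubelet -n 400 | \
	  grep -E 'reflector\.go:[0-9]+\].*(forbidden|no relationship found)'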
	I0723 14:30:36.711722 3324089 logs.go:123] Gathering logs for kube-apiserver [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91] ...
	I0723 14:30:36.711755 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:36.771692 3324089 logs.go:123] Gathering logs for coredns [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c] ...
	I0723 14:30:36.771733 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:36.813680 3324089 logs.go:123] Gathering logs for kube-proxy [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437] ...
	I0723 14:30:36.813710 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:36.854363 3324089 logs.go:123] Gathering logs for CRI-O ...
	I0723 14:30:36.854390 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 14:30:36.946233 3324089 logs.go:123] Gathering logs for container status ...
	I0723 14:30:36.946273 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
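The backtick substitution above degrades gracefully: use crictl when present, otherwise let the literal `crictl` fail and fall through to docker. An equivalent, slightly more defensive form (a sketch, with `command -v` swapped in for `which`):

	sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a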
	I0723 14:30:37.016903 3324089 logs.go:123] Gathering logs for dmesg ...
	I0723 14:30:37.016956 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 14:30:37.041892 3324089 logs.go:123] Gathering logs for describe nodes ...
	I0723 14:30:37.041927 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
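The describe-nodes dump uses the kubelet-shipped kubectl binary and the in-node kubeconfig; from the host, the equivalent command (context name as used throughout this report) is:

	kubectl --context addons-140056 describe nodes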
	I0723 14:30:37.209083 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:37.209109 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0723 14:30:37.209265 3324089 out.go:239] X Problems detected in kubelet:
	W0723 14:30:37.209278 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:37.209286 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:37.209328 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:37.209359 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:37.209366 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:37.209378 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:37.209385 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:30:47.210735 3324089 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:30:47.228220 3324089 api_server.go:72] duration metric: took 2m18.237359054s to wait for apiserver process to appear ...
	I0723 14:30:47.228268 3324089 api_server.go:88] waiting for apiserver healthz status ...
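The healthz wait is gated on the apiserver process existing first; the same check by hand (pattern taken from the pgrep invocation above, quoted for the shell):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # prints a PID once the apiserver is up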
	I0723 14:30:47.228332 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 14:30:47.228401 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 14:30:47.269505 3324089 cri.go:89] found id: "a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:47.269531 3324089 cri.go:89] found id: ""
	I0723 14:30:47.269539 3324089 logs.go:276] 1 containers: [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91]
	I0723 14:30:47.269624 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.273534 3324089 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 14:30:47.273605 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 14:30:47.315725 3324089 cri.go:89] found id: "137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:47.315745 3324089 cri.go:89] found id: ""
	I0723 14:30:47.315753 3324089 logs.go:276] 1 containers: [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9]
	I0723 14:30:47.315815 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.319753 3324089 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 14:30:47.319872 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 14:30:47.360343 3324089 cri.go:89] found id: "0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:47.360368 3324089 cri.go:89] found id: ""
	I0723 14:30:47.360376 3324089 logs.go:276] 1 containers: [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c]
	I0723 14:30:47.360440 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.364054 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 14:30:47.364182 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 14:30:47.404963 3324089 cri.go:89] found id: "54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:47.404986 3324089 cri.go:89] found id: ""
	I0723 14:30:47.404994 3324089 logs.go:276] 1 containers: [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d]
	I0723 14:30:47.405052 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.408542 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 14:30:47.408612 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 14:30:47.446412 3324089 cri.go:89] found id: "82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:47.446438 3324089 cri.go:89] found id: ""
	I0723 14:30:47.446449 3324089 logs.go:276] 1 containers: [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437]
	I0723 14:30:47.446520 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.450262 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 14:30:47.450340 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 14:30:47.488314 3324089 cri.go:89] found id: "a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:47.488336 3324089 cri.go:89] found id: ""
	I0723 14:30:47.488344 3324089 logs.go:276] 1 containers: [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967]
	I0723 14:30:47.488401 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.492066 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 14:30:47.492158 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 14:30:47.538826 3324089 cri.go:89] found id: "bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:47.538846 3324089 cri.go:89] found id: ""
	I0723 14:30:47.538853 3324089 logs.go:276] 1 containers: [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f]
	I0723 14:30:47.538912 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.543137 3324089 logs.go:123] Gathering logs for kube-apiserver [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91] ...
	I0723 14:30:47.543170 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:47.602911 3324089 logs.go:123] Gathering logs for etcd [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9] ...
	I0723 14:30:47.602946 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:47.670819 3324089 logs.go:123] Gathering logs for kube-controller-manager [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967] ...
	I0723 14:30:47.670853 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:47.762514 3324089 logs.go:123] Gathering logs for kindnet [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f] ...
	I0723 14:30:47.762603 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:47.815271 3324089 logs.go:123] Gathering logs for CRI-O ...
	I0723 14:30:47.815306 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 14:30:47.919725 3324089 logs.go:123] Gathering logs for kubelet ...
	I0723 14:30:47.919804 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0723 14:30:47.961098 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.261241    1548 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.961343 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.261372    1548 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.963737 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271322    1548 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.963956 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271363    1548 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.964137 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271626    1548 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.964336 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271661    1548 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.964499 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271627    1548 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.964679 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271686    1548 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.966483 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302024    1548 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.966704 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302075    1548 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.966879 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302140    1548 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967072 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302153    1548 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967263 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302194    1548 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967470 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302209    1548 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967650 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302254    1548 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967849 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302266    1548 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968034 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302313    1548 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968246 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968434 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968640 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968960 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.969147 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:48.012602 3324089 logs.go:123] Gathering logs for describe nodes ...
	I0723 14:30:48.012647 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 14:30:48.187330 3324089 logs.go:123] Gathering logs for coredns [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c] ...
	I0723 14:30:48.187358 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:48.232203 3324089 logs.go:123] Gathering logs for kube-scheduler [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d] ...
	I0723 14:30:48.232237 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:48.284936 3324089 logs.go:123] Gathering logs for kube-proxy [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437] ...
	I0723 14:30:48.284985 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:48.322311 3324089 logs.go:123] Gathering logs for container status ...
	I0723 14:30:48.322340 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 14:30:48.374834 3324089 logs.go:123] Gathering logs for dmesg ...
	I0723 14:30:48.374863 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 14:30:48.393715 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:48.393738 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0723 14:30:48.393822 3324089 out.go:239] X Problems detected in kubelet:
	W0723 14:30:48.393838 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:48.393846 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:48.393975 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:48.393995 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:48.394011 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:48.394018 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:48.394027 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:30:58.395042 3324089 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0723 14:30:58.403031 3324089 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0723 14:30:58.404037 3324089 api_server.go:141] control plane version: v1.30.3
	I0723 14:30:58.404065 3324089 api_server.go:131] duration metric: took 11.17578447s to wait for apiserver health ...
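Once healthz returns 200/ok, the endpoint can be probed directly; a sketch assuming the default public-info-viewer binding that exposes /healthz and /version to anonymous clients (-k because the node serves a self-signed certificate; address and port from the log):

	curl -k https://192.168.49.2:8443/healthz   # expect: ok
	curl -k https://192.168.49.2:8443/version   # gitVersion should report v1.30.3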
	I0723 14:30:58.404075 3324089 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 14:30:58.404096 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 14:30:58.404166 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 14:30:58.460759 3324089 cri.go:89] found id: "a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:58.460780 3324089 cri.go:89] found id: ""
	I0723 14:30:58.460788 3324089 logs.go:276] 1 containers: [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91]
	I0723 14:30:58.460847 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.464276 3324089 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 14:30:58.464352 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 14:30:58.500803 3324089 cri.go:89] found id: "137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:58.500822 3324089 cri.go:89] found id: ""
	I0723 14:30:58.500831 3324089 logs.go:276] 1 containers: [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9]
	I0723 14:30:58.500886 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.504441 3324089 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 14:30:58.504514 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 14:30:58.541804 3324089 cri.go:89] found id: "0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:58.541823 3324089 cri.go:89] found id: ""
	I0723 14:30:58.541831 3324089 logs.go:276] 1 containers: [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c]
	I0723 14:30:58.541885 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.545534 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 14:30:58.545600 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 14:30:58.583913 3324089 cri.go:89] found id: "54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:58.583936 3324089 cri.go:89] found id: ""
	I0723 14:30:58.583944 3324089 logs.go:276] 1 containers: [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d]
	I0723 14:30:58.583999 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.588604 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 14:30:58.588675 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 14:30:58.626753 3324089 cri.go:89] found id: "82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:58.626775 3324089 cri.go:89] found id: ""
	I0723 14:30:58.626783 3324089 logs.go:276] 1 containers: [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437]
	I0723 14:30:58.626839 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.630452 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 14:30:58.630598 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 14:30:58.674868 3324089 cri.go:89] found id: "a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:58.674888 3324089 cri.go:89] found id: ""
	I0723 14:30:58.674896 3324089 logs.go:276] 1 containers: [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967]
	I0723 14:30:58.674959 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.678606 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 14:30:58.678689 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 14:30:58.717849 3324089 cri.go:89] found id: "bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:58.717874 3324089 cri.go:89] found id: ""
	I0723 14:30:58.717882 3324089 logs.go:276] 1 containers: [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f]
	I0723 14:30:58.717937 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.721890 3324089 logs.go:123] Gathering logs for kubelet ...
	I0723 14:30:58.721918 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0723 14:30:58.764391 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.261241    1548 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.764630 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.261372    1548 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.766939 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271322    1548 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767152 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271363    1548 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767333 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271626    1548 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767533 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271661    1548 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767697 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271627    1548 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767879 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271686    1548 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.769636 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302024    1548 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.769845 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302075    1548 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770018 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302140    1548 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770210 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302153    1548 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770396 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302194    1548 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770612 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302209    1548 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770791 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302254    1548 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770991 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302266    1548 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.771184 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302313    1548 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.771392 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.771579 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.771785 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.772106 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.772290 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:58.815554 3324089 logs.go:123] Gathering logs for describe nodes ...
	I0723 14:30:58.815580 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 14:30:58.942348 3324089 logs.go:123] Gathering logs for coredns [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c] ...
	I0723 14:30:58.942458 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:58.984159 3324089 logs.go:123] Gathering logs for kube-scheduler [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d] ...
	I0723 14:30:58.984196 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:59.037940 3324089 logs.go:123] Gathering logs for kube-proxy [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437] ...
	I0723 14:30:59.037973 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:59.078449 3324089 logs.go:123] Gathering logs for kube-controller-manager [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967] ...
	I0723 14:30:59.078478 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:59.175340 3324089 logs.go:123] Gathering logs for kindnet [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f] ...
	I0723 14:30:59.175379 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:59.223478 3324089 logs.go:123] Gathering logs for container status ...
	I0723 14:30:59.223511 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 14:30:59.289254 3324089 logs.go:123] Gathering logs for dmesg ...
	I0723 14:30:59.289285 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 14:30:59.308508 3324089 logs.go:123] Gathering logs for kube-apiserver [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91] ...
	I0723 14:30:59.308550 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:59.394637 3324089 logs.go:123] Gathering logs for etcd [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9] ...
	I0723 14:30:59.394670 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:59.463095 3324089 logs.go:123] Gathering logs for CRI-O ...
	I0723 14:30:59.463129 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 14:30:59.571293 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:59.571354 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0723 14:30:59.571436 3324089 out.go:239] X Problems detected in kubelet:
	W0723 14:30:59.571580 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:59.571594 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:59.571624 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:59.571637 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:59.571647 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:59.571653 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:59.571660 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:31:09.583936 3324089 system_pods.go:59] 18 kube-system pods found
	I0723 14:31:09.583990 3324089 system_pods.go:61] "coredns-7db6d8ff4d-jgz96" [3ec14c0f-c68d-4fd5-8582-5459477e40f5] Running
	I0723 14:31:09.583997 3324089 system_pods.go:61] "csi-hostpath-attacher-0" [3168c014-5a39-4ad9-bca0-efd7be769099] Running
	I0723 14:31:09.584002 3324089 system_pods.go:61] "csi-hostpath-resizer-0" [6cd26b68-041c-4848-8141-53baaab748f2] Running
	I0723 14:31:09.584007 3324089 system_pods.go:61] "csi-hostpathplugin-s9wmq" [271af35f-c33e-4782-ac08-d1c6e905f4b9] Running
	I0723 14:31:09.584011 3324089 system_pods.go:61] "etcd-addons-140056" [a774b4b3-a8ab-4841-b224-b8ae6f3ca338] Running
	I0723 14:31:09.584015 3324089 system_pods.go:61] "kindnet-2f7s4" [b028186c-e060-45cd-b380-c68f5957f6e8] Running
	I0723 14:31:09.584019 3324089 system_pods.go:61] "kube-apiserver-addons-140056" [e00d998c-7953-483b-b95e-44629436c611] Running
	I0723 14:31:09.584023 3324089 system_pods.go:61] "kube-controller-manager-addons-140056" [4a59bc12-47b7-4b80-8799-2297b8a54676] Running
	I0723 14:31:09.584028 3324089 system_pods.go:61] "kube-ingress-dns-minikube" [f19d23b6-9b9b-4771-aeaf-40a41665b578] Running
	I0723 14:31:09.584033 3324089 system_pods.go:61] "kube-proxy-qch7m" [ae8a5d47-ee7a-4d28-a940-13c073ba54b1] Running
	I0723 14:31:09.584037 3324089 system_pods.go:61] "kube-scheduler-addons-140056" [4b4bba11-865d-4a9d-97d1-5c4c0c60db06] Running
	I0723 14:31:09.584042 3324089 system_pods.go:61] "metrics-server-c59844bb4-ql9z2" [624cee58-45f6-4199-bfae-0fb883077e3f] Running
	I0723 14:31:09.584053 3324089 system_pods.go:61] "nvidia-device-plugin-daemonset-rhfcp" [724260a7-4c1d-4daf-a392-8f7cf7efaa06] Running
	I0723 14:31:09.584063 3324089 system_pods.go:61] "registry-656c9c8d9c-pjd4j" [1859702d-c9a6-460d-81c6-102ef98b706b] Running
	I0723 14:31:09.584067 3324089 system_pods.go:61] "registry-proxy-g8j86" [9477b7ff-d5fd-48f9-ad75-25e57440ab34] Running
	I0723 14:31:09.584071 3324089 system_pods.go:61] "snapshot-controller-745499f584-8fqv4" [3b243cc0-c3dc-4dab-975e-450249ec2899] Running
	I0723 14:31:09.584074 3324089 system_pods.go:61] "snapshot-controller-745499f584-drrj2" [b0844d73-10ab-444a-9a27-9c7b26a76450] Running
	I0723 14:31:09.584078 3324089 system_pods.go:61] "storage-provisioner" [ba9d48df-c1eb-455d-973a-5a8b814e6290] Running
	I0723 14:31:09.584084 3324089 system_pods.go:74] duration metric: took 11.180002701s to wait for pod list to return data ...
	I0723 14:31:09.584095 3324089 default_sa.go:34] waiting for default service account to be created ...
	I0723 14:31:09.586634 3324089 default_sa.go:45] found service account: "default"
	I0723 14:31:09.586662 3324089 default_sa.go:55] duration metric: took 2.558809ms for default service account to be created ...
	I0723 14:31:09.586673 3324089 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 14:31:09.595962 3324089 system_pods.go:86] 18 kube-system pods found
	I0723 14:31:09.596004 3324089 system_pods.go:89] "coredns-7db6d8ff4d-jgz96" [3ec14c0f-c68d-4fd5-8582-5459477e40f5] Running
	I0723 14:31:09.596011 3324089 system_pods.go:89] "csi-hostpath-attacher-0" [3168c014-5a39-4ad9-bca0-efd7be769099] Running
	I0723 14:31:09.596015 3324089 system_pods.go:89] "csi-hostpath-resizer-0" [6cd26b68-041c-4848-8141-53baaab748f2] Running
	I0723 14:31:09.596020 3324089 system_pods.go:89] "csi-hostpathplugin-s9wmq" [271af35f-c33e-4782-ac08-d1c6e905f4b9] Running
	I0723 14:31:09.596024 3324089 system_pods.go:89] "etcd-addons-140056" [a774b4b3-a8ab-4841-b224-b8ae6f3ca338] Running
	I0723 14:31:09.596029 3324089 system_pods.go:89] "kindnet-2f7s4" [b028186c-e060-45cd-b380-c68f5957f6e8] Running
	I0723 14:31:09.596033 3324089 system_pods.go:89] "kube-apiserver-addons-140056" [e00d998c-7953-483b-b95e-44629436c611] Running
	I0723 14:31:09.596037 3324089 system_pods.go:89] "kube-controller-manager-addons-140056" [4a59bc12-47b7-4b80-8799-2297b8a54676] Running
	I0723 14:31:09.596041 3324089 system_pods.go:89] "kube-ingress-dns-minikube" [f19d23b6-9b9b-4771-aeaf-40a41665b578] Running
	I0723 14:31:09.596045 3324089 system_pods.go:89] "kube-proxy-qch7m" [ae8a5d47-ee7a-4d28-a940-13c073ba54b1] Running
	I0723 14:31:09.596049 3324089 system_pods.go:89] "kube-scheduler-addons-140056" [4b4bba11-865d-4a9d-97d1-5c4c0c60db06] Running
	I0723 14:31:09.596053 3324089 system_pods.go:89] "metrics-server-c59844bb4-ql9z2" [624cee58-45f6-4199-bfae-0fb883077e3f] Running
	I0723 14:31:09.596057 3324089 system_pods.go:89] "nvidia-device-plugin-daemonset-rhfcp" [724260a7-4c1d-4daf-a392-8f7cf7efaa06] Running
	I0723 14:31:09.596061 3324089 system_pods.go:89] "registry-656c9c8d9c-pjd4j" [1859702d-c9a6-460d-81c6-102ef98b706b] Running
	I0723 14:31:09.596065 3324089 system_pods.go:89] "registry-proxy-g8j86" [9477b7ff-d5fd-48f9-ad75-25e57440ab34] Running
	I0723 14:31:09.596070 3324089 system_pods.go:89] "snapshot-controller-745499f584-8fqv4" [3b243cc0-c3dc-4dab-975e-450249ec2899] Running
	I0723 14:31:09.596079 3324089 system_pods.go:89] "snapshot-controller-745499f584-drrj2" [b0844d73-10ab-444a-9a27-9c7b26a76450] Running
	I0723 14:31:09.596084 3324089 system_pods.go:89] "storage-provisioner" [ba9d48df-c1eb-455d-973a-5a8b814e6290] Running
	I0723 14:31:09.596094 3324089 system_pods.go:126] duration metric: took 9.415409ms to wait for k8s-apps to be running ...
	I0723 14:31:09.596518 3324089 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 14:31:09.596598 3324089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:31:09.609164 3324089 system_svc.go:56] duration metric: took 13.051218ms WaitForService to wait for kubelet
	I0723 14:31:09.609198 3324089 kubeadm.go:582] duration metric: took 2m40.618344346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:31:09.609223 3324089 node_conditions.go:102] verifying NodePressure condition ...
	I0723 14:31:09.613035 3324089 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0723 14:31:09.613068 3324089 node_conditions.go:123] node cpu capacity is 2
	I0723 14:31:09.613079 3324089 node_conditions.go:105] duration metric: took 3.851091ms to run NodePressure ...
	I0723 14:31:09.613092 3324089 start.go:241] waiting for startup goroutines ...
	I0723 14:31:09.613099 3324089 start.go:246] waiting for cluster config update ...
	I0723 14:31:09.613118 3324089 start.go:255] writing updated cluster config ...
	I0723 14:31:09.613400 3324089 ssh_runner.go:195] Run: rm -f paused
	I0723 14:31:09.957052 3324089 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 14:31:09.959226 3324089 out.go:177] * Done! kubectl is now configured to use "addons-140056" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 23 14:34:53 addons-140056 crio[967]: time="2024-07-23 14:34:53.955117681Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=755b3587-da10-4ce8-80a9-14b79d54ba58 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:34:53 addons-140056 crio[967]: time="2024-07-23 14:34:53.956997884Z" level=info msg="Creating container: default/hello-world-app-6778b5fc9f-4zn9v/hello-world-app" id=7c2235b2-dc90-43ea-b7c5-a61acff97607 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 14:34:53 addons-140056 crio[967]: time="2024-07-23 14:34:53.957106029Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 23 14:34:53 addons-140056 crio[967]: time="2024-07-23 14:34:53.976913430Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5f1807317155819b69a13104ac07f0c6cbed07d7e28d732eec37c074e8cf0ed3/merged/etc/passwd: no such file or directory"
	Jul 23 14:34:53 addons-140056 crio[967]: time="2024-07-23 14:34:53.976965714Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5f1807317155819b69a13104ac07f0c6cbed07d7e28d732eec37c074e8cf0ed3/merged/etc/group: no such file or directory"
	Jul 23 14:34:54 addons-140056 crio[967]: time="2024-07-23 14:34:54.020886241Z" level=info msg="Created container 2deccc7daf59e8c1ea5c2148b35d10c3cf48bcc49ed7753d5549c5713ad1f498: default/hello-world-app-6778b5fc9f-4zn9v/hello-world-app" id=7c2235b2-dc90-43ea-b7c5-a61acff97607 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 14:34:54 addons-140056 crio[967]: time="2024-07-23 14:34:54.021891849Z" level=info msg="Starting container: 2deccc7daf59e8c1ea5c2148b35d10c3cf48bcc49ed7753d5549c5713ad1f498" id=45eb8ce9-623d-485d-9b54-9f16e47bac38 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 14:34:54 addons-140056 crio[967]: time="2024-07-23 14:34:54.028070278Z" level=info msg="Started container" PID=8270 containerID=2deccc7daf59e8c1ea5c2148b35d10c3cf48bcc49ed7753d5549c5713ad1f498 description=default/hello-world-app-6778b5fc9f-4zn9v/hello-world-app id=45eb8ce9-623d-485d-9b54-9f16e47bac38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f5c6f7d5071cde853ab50760688e0fc282b8ca52158f982fbf9a70f38884278f
	Jul 23 14:34:54 addons-140056 crio[967]: time="2024-07-23 14:34:54.082937178Z" level=info msg="Removing container: 776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da" id=04104458-006e-49ff-9d65-f490c59ec7db name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 23 14:34:54 addons-140056 crio[967]: time="2024-07-23 14:34:54.109252842Z" level=info msg="Removed container 776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=04104458-006e-49ff-9d65-f490c59ec7db name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 23 14:34:55 addons-140056 crio[967]: time="2024-07-23 14:34:55.814136868Z" level=info msg="Stopping container: 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03 (timeout: 2s)" id=ce62bcda-6797-4523-ad3c-7c47fd077cad name=/runtime.v1.RuntimeService/StopContainer
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.820462325Z" level=warning msg="Stopping container 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=ce62bcda-6797-4523-ad3c-7c47fd077cad name=/runtime.v1.RuntimeService/StopContainer
	Jul 23 14:34:57 addons-140056 conmon[4994]: conmon 2ab9cb1b6a6e93dfe4f8 <ninfo>: container 5006 exited with status 137
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.956774092Z" level=info msg="Stopped container 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03: ingress-nginx/ingress-nginx-controller-6d9bd977d4-tsvlg/controller" id=ce62bcda-6797-4523-ad3c-7c47fd077cad name=/runtime.v1.RuntimeService/StopContainer
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.957378456Z" level=info msg="Stopping pod sandbox: 646d31216b1f841d4e240f3ec083a827d3752d312f54dcda1ba5a6accedccb8a" id=bb24e7e6-bd92-441b-ac21-041ca3760e12 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.960997306Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-CLJCURUXX3VTT2UL - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-J7DQYTOCBS5DZSZ3 - [0:0]\n-X KUBE-HP-CLJCURUXX3VTT2UL\n-X KUBE-HP-J7DQYTOCBS5DZSZ3\nCOMMIT\n"
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.965529233Z" level=info msg="Closing host port tcp:80"
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.965586932Z" level=info msg="Closing host port tcp:443"
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.967005657Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.967033251Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.967213282Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6d9bd977d4-tsvlg Namespace:ingress-nginx ID:646d31216b1f841d4e240f3ec083a827d3752d312f54dcda1ba5a6accedccb8a UID:a96d060b-4b9d-411e-9243-066225274171 NetNS:/var/run/netns/1841f125-9b8a-4ff9-8753-2388dc992bc1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.967358629Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6d9bd977d4-tsvlg from CNI network \"kindnet\" (type=ptp)"
	Jul 23 14:34:57 addons-140056 crio[967]: time="2024-07-23 14:34:57.980367918Z" level=info msg="Stopped pod sandbox: 646d31216b1f841d4e240f3ec083a827d3752d312f54dcda1ba5a6accedccb8a" id=bb24e7e6-bd92-441b-ac21-041ca3760e12 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:34:58 addons-140056 crio[967]: time="2024-07-23 14:34:58.091702515Z" level=info msg="Removing container: 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03" id=e7a3bcdc-b1bc-4119-8dd6-a579edea7e36 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 23 14:34:58 addons-140056 crio[967]: time="2024-07-23 14:34:58.105851805Z" level=info msg="Removed container 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03: ingress-nginx/ingress-nginx-controller-6d9bd977d4-tsvlg/controller" id=e7a3bcdc-b1bc-4119-8dd6-a579edea7e36 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2deccc7daf59e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   f5c6f7d5071cd       hello-world-app-6778b5fc9f-4zn9v
	4755e5aeed108       docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e                              2 minutes ago       Running             nginx                     0                   142f182854052       nginx
	651c584db6839       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        3 minutes ago       Running             headlamp                  0                   871491cc55c8d       headlamp-7867546754-xc2vd
	6bdf0ac15bdda       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 4 minutes ago       Running             gcp-auth                  0                   f3a5848d068ae       gcp-auth-5db96cd9b4-b42k7
	87c4ce0f65f7b       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              5 minutes ago       Running             yakd                      0                   9d9ce7531b9a3       yakd-dashboard-799879c74f-jkkhq
	14bde39b35915       296b5f799fcd8a39f0e93373bc18787d846c6a2a78a5657b1514831f043c09bf                                                             5 minutes ago       Exited              patch                     1                   fff5df7e6f10d       ingress-nginx-admission-patch-snt8v
	9bf4698e1ef67       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                    0                   52b6e1e3a5c12       ingress-nginx-admission-create-hhs2n
	3019c06c5c171       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   3a4ce44ed17ac       metrics-server-c59844bb4-ql9z2
	0c63da520ba2b       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                   0                   7a0819fadd8d8       coredns-7db6d8ff4d-jgz96
	d7535c8a235c4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   c864a0ebdcd8f       storage-provisioner
	bdb361c9cd9a1       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a                           6 minutes ago       Running             kindnet-cni               0                   b671b9e3303ab       kindnet-2f7s4
	82396ebc6d476       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                             6 minutes ago       Running             kube-proxy                0                   e8ab433d9f148       kube-proxy-qch7m
	137c42a93cc7c       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             6 minutes ago       Running             etcd                      0                   43cc279c25a25       etcd-addons-140056
	a58f73816b730       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                             6 minutes ago       Running             kube-controller-manager   0                   160ad4908df22       kube-controller-manager-addons-140056
	a958daba0b9ba       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                             6 minutes ago       Running             kube-apiserver            0                   914614ae397d3       kube-apiserver-addons-140056
	54c3777af6f92       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                             6 minutes ago       Running             kube-scheduler            0                   ceb6154379673       kube-scheduler-addons-140056
	
	
	==> coredns [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c] <==
	[INFO] 10.244.0.14:45959 - 6077 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002961381s
	[INFO] 10.244.0.14:44651 - 14282 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012755s
	[INFO] 10.244.0.14:44651 - 7112 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158509s
	[INFO] 10.244.0.14:32895 - 14730 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000134467s
	[INFO] 10.244.0.14:32895 - 31351 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000055927s
	[INFO] 10.244.0.14:48991 - 38530 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059989s
	[INFO] 10.244.0.14:48991 - 43648 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050052s
	[INFO] 10.244.0.14:55309 - 3559 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075529s
	[INFO] 10.244.0.14:55309 - 14565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055508s
	[INFO] 10.244.0.14:56101 - 7995 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001773183s
	[INFO] 10.244.0.14:56101 - 581 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007259772s
	[INFO] 10.244.0.14:40495 - 27364 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000058914s
	[INFO] 10.244.0.14:40495 - 47328 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001009s
	[INFO] 10.244.0.19:42441 - 16961 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000158238s
	[INFO] 10.244.0.19:60458 - 16730 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225135s
	[INFO] 10.244.0.19:34564 - 27988 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101819s
	[INFO] 10.244.0.19:60295 - 6013 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000090684s
	[INFO] 10.244.0.19:49181 - 41653 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008764s
	[INFO] 10.244.0.19:55294 - 3538 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00005957s
	[INFO] 10.244.0.19:56344 - 158 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007564259s
	[INFO] 10.244.0.19:55202 - 25875 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007938049s
	[INFO] 10.244.0.19:58488 - 61562 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000924779s
	[INFO] 10.244.0.19:53801 - 64116 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001664144s
	[INFO] 10.244.0.22:40783 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000501914s
	[INFO] 10.244.0.22:45144 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138693s
	
	
	==> describe nodes <==
	Name:               addons-140056
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-140056
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=addons-140056
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_28_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-140056
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:28:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-140056
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:34:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:32:50 +0000   Tue, 23 Jul 2024 14:28:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:32:50 +0000   Tue, 23 Jul 2024 14:28:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:32:50 +0000   Tue, 23 Jul 2024 14:28:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:32:50 +0000   Tue, 23 Jul 2024 14:29:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-140056
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d73b747a75c4370b5d2c406795a0045
	  System UUID:                2cc61e4c-48a4-4fb9-b435-38f736e4329b
	  Boot ID:                    95e04985-bf92-47a1-9b5b-7f09371b9e30
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-4zn9v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-5db96cd9b4-b42k7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  headlamp                    headlamp-7867546754-xc2vd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7db6d8ff4d-jgz96                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m34s
	  kube-system                 etcd-addons-140056                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m48s
	  kube-system                 kindnet-2f7s4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m35s
	  kube-system                 kube-apiserver-addons-140056             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-controller-manager-addons-140056    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-proxy-qch7m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-scheduler-addons-140056             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 metrics-server-c59844bb4-ql9z2           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m29s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  yakd-dashboard              yakd-dashboard-799879c74f-jkkhq          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m28s  kube-proxy       
	  Normal  Starting                 6m48s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m48s  kubelet          Node addons-140056 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m48s  kubelet          Node addons-140056 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m48s  kubelet          Node addons-140056 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m35s  node-controller  Node addons-140056 event: Registered Node addons-140056 in Controller
	  Normal  NodeReady                5m48s  kubelet          Node addons-140056 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001070] FS-Cache: O-key=[8] '2d713b0000000000'
	[  +0.000720] FS-Cache: N-cookie c=000000d2 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000000709a92e
	[  +0.001110] FS-Cache: N-key=[8] '2d713b0000000000'
	[  +0.008114] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=0000000092f01866
	[  +0.001106] FS-Cache: O-key=[8] '2d713b0000000000'
	[  +0.000742] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001059] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000655937a6
	[  +0.001197] FS-Cache: N-key=[8] '2d713b0000000000'
	[  +2.882746] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001045] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=000000008eb2f51f
	[  +0.001080] FS-Cache: O-key=[8] '2c713b0000000000'
	[  +0.000745] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000008f7cdf75
	[  +0.001066] FS-Cache: N-key=[8] '2c713b0000000000'
	[  +0.323741] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001294] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=00000000897df759
	[  +0.001091] FS-Cache: O-key=[8] '32713b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000001ce9a292
	[  +0.001092] FS-Cache: N-key=[8] '32713b0000000000'
	
	
	==> etcd [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9] <==
	{"level":"warn","ts":"2024-07-23T14:28:33.517577Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.195039Z","time spent":"322.533942ms","remote":"127.0.0.1:45928","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":29,"request content":"key:\"/registry/clusterrolebindings/storage-provisioner\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.517691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.687511ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-140056\" ","response":"range_response_count:1 size:5744"}
	{"level":"info","ts":"2024-07-23T14:28:33.517716Z","caller":"traceutil/trace.go:171","msg":"trace[1035957126] range","detail":"{range_begin:/registry/minions/addons-140056; range_end:; response_count:1; response_revision:378; }","duration":"322.711979ms","start":"2024-07-23T14:28:33.194997Z","end":"2024-07-23T14:28:33.517709Z","steps":["trace[1035957126] 'agreement among raft nodes before linearized reading'  (duration: 231.297148ms)","trace[1035957126] 'get authentication metadata'  (duration: 40.519344ms)","trace[1035957126] 'range keys from in-memory index tree'  (duration: 50.854018ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.517734Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.18444Z","time spent":"333.28938ms","remote":"127.0.0.1:45778","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5768,"request content":"key:\"/registry/minions/addons-140056\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.517831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.132113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/local-path-storage/\" range_end:\"/registry/resourcequotas/local-path-storage0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T14:28:33.517856Z","caller":"traceutil/trace.go:171","msg":"trace[2070602465] range","detail":"{range_begin:/registry/resourcequotas/local-path-storage/; range_end:/registry/resourcequotas/local-path-storage0; response_count:0; response_revision:378; }","duration":"340.155383ms","start":"2024-07-23T14:28:33.177692Z","end":"2024-07-23T14:28:33.517847Z","steps":["trace[2070602465] 'agreement among raft nodes before linearized reading'  (duration: 248.605034ms)","trace[2070602465] 'get authentication metadata'  (duration: 40.52238ms)","trace[2070602465] 'range keys from in-memory index tree'  (duration: 51.002582ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.517875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.177679Z","time spent":"340.189394ms","remote":"127.0.0.1:45706","response type":"/etcdserverpb.KV/Range","request count":0,"request size":92,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas/local-path-storage/\" range_end:\"/registry/resourcequotas/local-path-storage0\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.51797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.432039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T14:28:33.517994Z","caller":"traceutil/trace.go:171","msg":"trace[192733837] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:378; }","duration":"340.455448ms","start":"2024-07-23T14:28:33.177531Z","end":"2024-07-23T14:28:33.517987Z","steps":["trace[192733837] 'agreement among raft nodes before linearized reading'  (duration: 248.76813ms)","trace[192733837] 'get authentication metadata'  (duration: 40.524891ms)","trace[192733837] 'range keys from in-memory index tree'  (duration: 51.136999ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.518011Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.1775Z","time spent":"340.505886ms","remote":"127.0.0.1:45920","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":29,"request content":"key:\"/registry/clusterroles/minikube-ingress-dns\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.534185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"393.523729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-07-23T14:28:33.534978Z","caller":"traceutil/trace.go:171","msg":"trace[1309546410] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:378; }","duration":"394.570692ms","start":"2024-07-23T14:28:33.140386Z","end":"2024-07-23T14:28:33.534957Z","steps":["trace[1309546410] 'agreement among raft nodes before linearized reading'  (duration: 285.915848ms)","trace[1309546410] 'get authentication metadata'  (duration: 40.527147ms)","trace[1309546410] 'range keys from in-memory index tree'  (duration: 67.057866ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.549499Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.140372Z","time spent":"409.067647ms","remote":"127.0.0.1:45676","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":140,"request content":"key:\"/registry/ranges/serviceips\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.549764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"409.680274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T14:28:33.549821Z","caller":"traceutil/trace.go:171","msg":"trace[707520520] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:0; response_revision:378; }","duration":"409.739204ms","start":"2024-07-23T14:28:33.140068Z","end":"2024-07-23T14:28:33.549807Z","steps":["trace[707520520] 'agreement among raft nodes before linearized reading'  (duration: 286.236196ms)","trace[707520520] 'get authentication metadata'  (duration: 40.540702ms)","trace[707520520] 'range keys from in-memory index tree'  (duration: 82.89408ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.549845Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.14003Z","time spent":"409.810401ms","remote":"127.0.0.1:45730","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":29,"request content":"key:\"/registry/namespaces/gadget\" "}
	{"level":"info","ts":"2024-07-23T14:28:33.550585Z","caller":"traceutil/trace.go:171","msg":"trace[1112847054] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"145.168052ms","start":"2024-07-23T14:28:33.405406Z","end":"2024-07-23T14:28:33.550574Z","steps":["trace[1112847054] 'process raft request'  (duration: 65.881237ms)","trace[1112847054] 'compare'  (duration: 40.669967ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:28:33.788267Z","caller":"traceutil/trace.go:171","msg":"trace[1084376016] linearizableReadLoop","detail":"{readStateIndex:399; appliedIndex:398; }","duration":"116.889816ms","start":"2024-07-23T14:28:33.671361Z","end":"2024-07-23T14:28:33.788251Z","steps":["trace[1084376016] 'read index received'  (duration: 70.288691ms)","trace[1084376016] 'applied index is now lower than readState.Index'  (duration: 46.600567ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:28:33.788735Z","caller":"traceutil/trace.go:171","msg":"trace[1989678502] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"119.205454ms","start":"2024-07-23T14:28:33.669516Z","end":"2024-07-23T14:28:33.788722Z","steps":["trace[1989678502] 'process raft request'  (duration: 72.223754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:28:33.789069Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.688315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-23T14:28:33.789141Z","caller":"traceutil/trace.go:171","msg":"trace[1274984649] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:387; }","duration":"117.775751ms","start":"2024-07-23T14:28:33.671357Z","end":"2024-07-23T14:28:33.789133Z","steps":["trace[1274984649] 'agreement among raft nodes before linearized reading'  (duration: 117.620861ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:28:33.793176Z","caller":"traceutil/trace.go:171","msg":"trace[137792574] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"121.448591ms","start":"2024-07-23T14:28:33.671712Z","end":"2024-07-23T14:28:33.793161Z","steps":["trace[137792574] 'process raft request'  (duration: 116.495735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:28:33.811111Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.317313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/gadget/\" range_end:\"/registry/resourcequotas/gadget0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T14:28:33.811335Z","caller":"traceutil/trace.go:171","msg":"trace[783303961] range","detail":"{range_begin:/registry/resourcequotas/gadget/; range_end:/registry/resourcequotas/gadget0; response_count:0; response_revision:391; }","duration":"139.54994ms","start":"2024-07-23T14:28:33.671771Z","end":"2024-07-23T14:28:33.811321Z","steps":["trace[783303961] 'agreement among raft nodes before linearized reading'  (duration: 139.26329ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:28:33.811647Z","caller":"traceutil/trace.go:171","msg":"trace[1273456742] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"139.727542ms","start":"2024-07-23T14:28:33.671885Z","end":"2024-07-23T14:28:33.811612Z","steps":["trace[1273456742] 'process raft request'  (duration: 121.267305ms)","trace[1273456742] 'compare'  (duration: 17.678566ms)"],"step_count":2}
	
	
	==> gcp-auth [6bdf0ac15bdda759dbe5d8fd617a965b1af7a9de1f0cda30b2720d03bca35ce9] <==
	2024/07/23 14:30:10 GCP Auth Webhook started!
	2024/07/23 14:31:10 Ready to marshal response ...
	2024/07/23 14:31:10 Ready to write response ...
	2024/07/23 14:31:10 Ready to marshal response ...
	2024/07/23 14:31:10 Ready to write response ...
	2024/07/23 14:31:10 Ready to marshal response ...
	2024/07/23 14:31:10 Ready to write response ...
	2024/07/23 14:31:21 Ready to marshal response ...
	2024/07/23 14:31:21 Ready to write response ...
	2024/07/23 14:31:26 Ready to marshal response ...
	2024/07/23 14:31:26 Ready to write response ...
	2024/07/23 14:31:26 Ready to marshal response ...
	2024/07/23 14:31:26 Ready to write response ...
	2024/07/23 14:31:40 Ready to marshal response ...
	2024/07/23 14:31:40 Ready to write response ...
	2024/07/23 14:31:48 Ready to marshal response ...
	2024/07/23 14:31:48 Ready to write response ...
	2024/07/23 14:31:56 Ready to marshal response ...
	2024/07/23 14:31:56 Ready to write response ...
	2024/07/23 14:32:32 Ready to marshal response ...
	2024/07/23 14:32:32 Ready to write response ...
	2024/07/23 14:34:52 Ready to marshal response ...
	2024/07/23 14:34:52 Ready to write response ...
	
	
	==> kernel <==
	 14:35:03 up 23:17,  0 users,  load average: 0.13, 1.03, 1.80
	Linux addons-140056 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f] <==
	E0723 14:33:48.003312       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0723 14:33:54.712055       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:33:54.712088       1 main.go:299] handling current node
	I0723 14:34:04.711672       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:34:04.711705       1 main.go:299] handling current node
	W0723 14:34:05.605320       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:34:05.605356       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 14:34:06.299169       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:34:06.299201       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0723 14:34:14.711771       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:34:14.711807       1 main.go:299] handling current node
	W0723 14:34:24.436827       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 14:34:24.436952       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0723 14:34:24.711560       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:34:24.711596       1 main.go:299] handling current node
	I0723 14:34:34.712050       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:34:34.712084       1 main.go:299] handling current node
	W0723 14:34:38.821182       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:34:38.821220       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0723 14:34:44.711242       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:34:44.711278       1 main.go:299] handling current node
	W0723 14:34:51.760553       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:34:51.760590       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0723 14:34:54.711088       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:34:54.711224       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 14:30:40.643768       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0723 14:30:40.659160       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:31:10.865195       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.151.250"}
	I0723 14:31:52.119881       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0723 14:32:04.961398       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0723 14:32:11.293469       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.293610       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:32:11.330391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.330447       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:32:11.351168       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.351210       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:32:11.351517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.351550       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:32:11.442027       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.442188       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0723 14:32:12.352097       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0723 14:32:12.443078       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0723 14:32:12.460658       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0723 14:32:18.104625       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0723 14:32:19.148375       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0723 14:32:32.567536       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0723 14:32:32.900794       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.2.199"}
	I0723 14:34:52.485161       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.158.85"}
	E0723 14:34:54.125414       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967] <==
	W0723 14:33:35.128024       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:33:35.128062       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:33:41.576235       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:33:41.576363       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:34:08.647299       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:34:08.647337       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:34:24.013112       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:34:24.013152       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:34:24.199317       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:34:24.199355       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:34:27.826106       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:34:27.826146       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0723 14:34:52.306199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="74.292019ms"
	I0723 14:34:52.337038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="30.788694ms"
	I0723 14:34:52.338001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="35.479µs"
	I0723 14:34:52.338215       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="26.101µs"
	I0723 14:34:54.114641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="19.957032ms"
	I0723 14:34:54.114739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="52.423µs"
	I0723 14:34:54.775158       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0723 14:34:54.784036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="6.211µs"
	I0723 14:34:54.784358       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0723 14:34:56.965082       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:34:56.965122       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:34:59.787914       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:34:59.787954       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437] <==
	I0723 14:28:34.497062       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:28:34.642144       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0723 14:28:35.096609       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0723 14:28:35.096734       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:28:35.099079       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0723 14:28:35.099178       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0723 14:28:35.099228       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:28:35.099473       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:28:35.100177       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:28:35.101784       1 config.go:192] "Starting service config controller"
	I0723 14:28:35.103914       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:28:35.104029       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:28:35.104060       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:28:35.104570       1 config.go:319] "Starting node config controller"
	I0723 14:28:35.104621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:28:35.205757       1 shared_informer.go:320] Caches are synced for node config
	I0723 14:28:35.205874       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:28:35.205967       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d] <==
	W0723 14:28:12.775403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:28:12.775456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:28:12.775548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:28:12.775586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:28:12.776652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:28:12.776728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:28:13.644066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:28:13.644106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:28:13.652350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 14:28:13.652467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:28:13.710337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:28:13.710445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:28:13.746321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 14:28:13.746447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 14:28:13.819200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 14:28:13.819332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 14:28:13.843767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0723 14:28:13.843882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0723 14:28:13.885360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:28:13.885474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:28:13.943638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 14:28:13.943760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 14:28:14.167465       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:28:14.167591       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0723 14:28:15.853918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 14:34:52 addons-140056 kubelet[1548]: I0723 14:34:52.295262    1548 memory_manager.go:354] "RemoveStaleState removing state" podUID="c203407c-54b8-4dff-8002-5661f1205ce1" containerName="gadget"
	Jul 23 14:34:52 addons-140056 kubelet[1548]: I0723 14:34:52.485848    1548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f4b60e7c-d935-43e7-aa68-1ecedc3cd1c1-gcp-creds\") pod \"hello-world-app-6778b5fc9f-4zn9v\" (UID: \"f4b60e7c-d935-43e7-aa68-1ecedc3cd1c1\") " pod="default/hello-world-app-6778b5fc9f-4zn9v"
	Jul 23 14:34:52 addons-140056 kubelet[1548]: I0723 14:34:52.485914    1548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmktg\" (UniqueName: \"kubernetes.io/projected/f4b60e7c-d935-43e7-aa68-1ecedc3cd1c1-kube-api-access-nmktg\") pod \"hello-world-app-6778b5fc9f-4zn9v\" (UID: \"f4b60e7c-d935-43e7-aa68-1ecedc3cd1c1\") " pod="default/hello-world-app-6778b5fc9f-4zn9v"
	Jul 23 14:34:53 addons-140056 kubelet[1548]: I0723 14:34:53.594587    1548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2jlj\" (UniqueName: \"kubernetes.io/projected/f19d23b6-9b9b-4771-aeaf-40a41665b578-kube-api-access-f2jlj\") pod \"f19d23b6-9b9b-4771-aeaf-40a41665b578\" (UID: \"f19d23b6-9b9b-4771-aeaf-40a41665b578\") "
	Jul 23 14:34:53 addons-140056 kubelet[1548]: I0723 14:34:53.596484    1548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19d23b6-9b9b-4771-aeaf-40a41665b578-kube-api-access-f2jlj" (OuterVolumeSpecName: "kube-api-access-f2jlj") pod "f19d23b6-9b9b-4771-aeaf-40a41665b578" (UID: "f19d23b6-9b9b-4771-aeaf-40a41665b578"). InnerVolumeSpecName "kube-api-access-f2jlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:34:53 addons-140056 kubelet[1548]: I0723 14:34:53.695049    1548 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-f2jlj\" (UniqueName: \"kubernetes.io/projected/f19d23b6-9b9b-4771-aeaf-40a41665b578-kube-api-access-f2jlj\") on node \"addons-140056\" DevicePath \"\""
	Jul 23 14:34:54 addons-140056 kubelet[1548]: I0723 14:34:54.079416    1548 scope.go:117] "RemoveContainer" containerID="776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da"
	Jul 23 14:34:54 addons-140056 kubelet[1548]: I0723 14:34:54.109555    1548 scope.go:117] "RemoveContainer" containerID="776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da"
	Jul 23 14:34:54 addons-140056 kubelet[1548]: E0723 14:34:54.110022    1548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da\": container with ID starting with 776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da not found: ID does not exist" containerID="776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da"
	Jul 23 14:34:54 addons-140056 kubelet[1548]: I0723 14:34:54.110060    1548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da"} err="failed to get container status \"776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da\": rpc error: code = NotFound desc = could not find container \"776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da\": container with ID starting with 776d4e5af28b66f3115339c3aea556a2c8afc95296d968a80f950f41013e30da not found: ID does not exist"
	Jul 23 14:34:54 addons-140056 kubelet[1548]: I0723 14:34:54.119273    1548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-4zn9v" podStartSLOduration=1.114269883 podStartE2EDuration="2.119250435s" podCreationTimestamp="2024-07-23 14:34:52 +0000 UTC" firstStartedPulling="2024-07-23 14:34:52.947443402 +0000 UTC m=+397.880451030" lastFinishedPulling="2024-07-23 14:34:53.952423946 +0000 UTC m=+398.885431582" observedRunningTime="2024-07-23 14:34:54.09391957 +0000 UTC m=+399.026927198" watchObservedRunningTime="2024-07-23 14:34:54.119250435 +0000 UTC m=+399.052258071"
	Jul 23 14:34:55 addons-140056 kubelet[1548]: I0723 14:34:55.185444    1548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0b8c70-ac76-4b45-8cdd-c2051a9fa367" path="/var/lib/kubelet/pods/bb0b8c70-ac76-4b45-8cdd-c2051a9fa367/volumes"
	Jul 23 14:34:55 addons-140056 kubelet[1548]: I0723 14:34:55.185845    1548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19d23b6-9b9b-4771-aeaf-40a41665b578" path="/var/lib/kubelet/pods/f19d23b6-9b9b-4771-aeaf-40a41665b578/volumes"
	Jul 23 14:34:55 addons-140056 kubelet[1548]: I0723 14:34:55.186176    1548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc6d5c36-cff9-498c-811a-2240804cadf4" path="/var/lib/kubelet/pods/fc6d5c36-cff9-498c-811a-2240804cadf4/volumes"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.023886    1548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a96d060b-4b9d-411e-9243-066225274171-webhook-cert\") pod \"a96d060b-4b9d-411e-9243-066225274171\" (UID: \"a96d060b-4b9d-411e-9243-066225274171\") "
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.023961    1548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4dqz\" (UniqueName: \"kubernetes.io/projected/a96d060b-4b9d-411e-9243-066225274171-kube-api-access-b4dqz\") pod \"a96d060b-4b9d-411e-9243-066225274171\" (UID: \"a96d060b-4b9d-411e-9243-066225274171\") "
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.026359    1548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96d060b-4b9d-411e-9243-066225274171-kube-api-access-b4dqz" (OuterVolumeSpecName: "kube-api-access-b4dqz") pod "a96d060b-4b9d-411e-9243-066225274171" (UID: "a96d060b-4b9d-411e-9243-066225274171"). InnerVolumeSpecName "kube-api-access-b4dqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.029292    1548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96d060b-4b9d-411e-9243-066225274171-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a96d060b-4b9d-411e-9243-066225274171" (UID: "a96d060b-4b9d-411e-9243-066225274171"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.089875    1548 scope.go:117] "RemoveContainer" containerID="2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.106114    1548 scope.go:117] "RemoveContainer" containerID="2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: E0723 14:34:58.106508    1548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03\": container with ID starting with 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03 not found: ID does not exist" containerID="2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.106606    1548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03"} err="failed to get container status \"2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03\": rpc error: code = NotFound desc = could not find container \"2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03\": container with ID starting with 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03 not found: ID does not exist"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.125054    1548 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a96d060b-4b9d-411e-9243-066225274171-webhook-cert\") on node \"addons-140056\" DevicePath \"\""
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.125087    1548 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b4dqz\" (UniqueName: \"kubernetes.io/projected/a96d060b-4b9d-411e-9243-066225274171-kube-api-access-b4dqz\") on node \"addons-140056\" DevicePath \"\""
	Jul 23 14:34:59 addons-140056 kubelet[1548]: I0723 14:34:59.185859    1548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96d060b-4b9d-411e-9243-066225274171" path="/var/lib/kubelet/pods/a96d060b-4b9d-411e-9243-066225274171/volumes"
	
	
	==> storage-provisioner [d7535c8a235c47e1aa307559567967c7bd5c1404f060448c5dada7cf0456bd1d] <==
	I0723 14:29:16.022497       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 14:29:16.036719       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 14:29:16.038040       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 14:29:16.053114       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 14:29:16.053383       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-140056_ae590010-2694-4b97-853e-4227fb1b1c3c!
	I0723 14:29:16.054350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24be5b02-8199-4cac-8c8b-78a4be38111e", APIVersion:"v1", ResourceVersion:"898", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-140056_ae590010-2694-4b97-853e-4227fb1b1c3c became leader
	I0723 14:29:16.154627       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-140056_ae590010-2694-4b97-853e-4227fb1b1c3c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-140056 -n addons-140056
helpers_test.go:261: (dbg) Run:  kubectl --context addons-140056 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.06s)
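A minimal manual repro of the failing probe, for reference (hypothetical commands, not part of the harness; the addons-140056 profile and the ingress-nginx-controller deployment are taken from the logs above):

    # Re-run the probe the test issues over SSH, with an explicit curl timeout
    # (curl exit code 28 indicates a timeout):
    minikube -p addons-140056 ssh -- curl -s -m 30 -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/
    # If it times out again, inspect the controller directly:
    kubectl --context addons-140056 -n ingress-nginx get pods -o wide
    kubectl --context addons-140056 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50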

TestAddons/parallel/MetricsServer (317.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.496112ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-ql9z2" [624cee58-45f6-4199-bfae-0fb883077e3f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004165103s
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (96.063501ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 4m0.47177247s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (84.498233ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 4m2.25458931s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (103.627096ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 4m5.909281258s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (88.009241ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 4m15.216984133s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (97.46437ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 4m26.230887471s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (83.269335ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 4m38.39784887s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (86.223389ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 4m54.413469304s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (90.520834ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 5m45.27234213s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (94.038431ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 6m22.374081545s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (94.054249ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 7m7.083511479s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (96.430965ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 8m29.46986493s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-140056 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-140056 top pods -n kube-system: exit status 1 (91.631732ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jgz96, age: 9m9.08362335s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 addons disable metrics-server --alsologtostderr -v=1
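A minimal diagnostic sketch for the metrics failure above, for reference (hypothetical commands against the addons-140056 context; the metrics-server deployment name matches the kube-system pod seen in this run):

    # Check whether the metrics API is registered and Available:
    kubectl --context addons-140056 get apiservice v1beta1.metrics.k8s.io
    # Inspect what metrics-server itself reports:
    kubectl --context addons-140056 -n kube-system logs deploy/metrics-server --tail=50
    # "kubectl top nodes" fails the same way while the metrics API is not yet serving:
    kubectl --context addons-140056 top nodes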
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-140056
helpers_test.go:235: (dbg) docker inspect addons-140056:

-- stdout --
	[
	    {
	        "Id": "b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004",
	        "Created": "2024-07-23T14:27:52.86795676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3324574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-23T14:27:53.006461012Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:71a7ac3dcc1f66f9b927c200bbaca5de093c77584a8e2cceb20f7c37b7028780",
	        "ResolvConfPath": "/var/lib/docker/containers/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004/hosts",
	        "LogPath": "/var/lib/docker/containers/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004/b9b70e7c5302d0eb24d99157f9235e3b557101f19b1860e598af9393850f4004-json.log",
	        "Name": "/addons-140056",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-140056:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-140056",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6a6bb241bc0a3e4465e1a43aee1f75c8fc97f694270fdd11e35d031c22d4e2f5-init/diff:/var/lib/docker/overlay2/cc3f8b49bb50b989dafe94ead705091dcc80edbdd409e161d5028bc93b57b742/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a6bb241bc0a3e4465e1a43aee1f75c8fc97f694270fdd11e35d031c22d4e2f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a6bb241bc0a3e4465e1a43aee1f75c8fc97f694270fdd11e35d031c22d4e2f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a6bb241bc0a3e4465e1a43aee1f75c8fc97f694270fdd11e35d031c22d4e2f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-140056",
	                "Source": "/var/lib/docker/volumes/addons-140056/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-140056",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-140056",
	                "name.minikube.sigs.k8s.io": "addons-140056",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c675d27c4aef7dfa8be6dc67cd724b8c6f2d1428cbca9863b30b6f781624761",
	            "SandboxKey": "/var/run/docker/netns/6c675d27c4ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37152"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37156"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37154"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37155"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-140056": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e79c422ee0f203263c66e8d1be99ca58a269a2578e91f8ab8004aa4f5b89e281",
	                    "EndpointID": "0b5c2016faff0288e1cfa9c7c76c429ffdf9591e99e7f78251d58736438d6377",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-140056",
	                        "b9b70e7c5302"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-140056 -n addons-140056
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-140056 logs -n 25: (1.555147692s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-438325                                                                     | download-only-438325   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| delete  | -p download-only-292108                                                                     | download-only-292108   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| delete  | -p download-only-547065                                                                     | download-only-547065   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| start   | --download-only -p                                                                          | download-docker-248386 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | download-docker-248386                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-248386                                                                   | download-docker-248386 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-953180   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | binary-mirror-953180                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39823                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-953180                                                                     | binary-mirror-953180   | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| addons  | enable dashboard -p                                                                         | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | addons-140056                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | addons-140056                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-140056 --wait=true                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | -p addons-140056                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-140056 ip                                                                            | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	| addons  | addons-140056 addons disable                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | -p addons-140056                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | addons-140056                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-140056 ssh cat                                                                       | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:31 UTC |
	|         | /opt/local-path-provisioner/pvc-4719a5dc-20ce-42e3-9843-cd46009709ea_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-140056 addons disable                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:31 UTC | 23 Jul 24 14:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-140056 addons                                                                        | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:32 UTC | 23 Jul 24 14:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-140056 addons                                                                        | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:32 UTC | 23 Jul 24 14:32 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:32 UTC | 23 Jul 24 14:32 UTC |
	|         | addons-140056                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-140056 ssh curl -s                                                                   | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-140056 ip                                                                            | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:34 UTC | 23 Jul 24 14:34 UTC |
	| addons  | addons-140056 addons disable                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:34 UTC | 23 Jul 24 14:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-140056 addons disable                                                                | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:34 UTC | 23 Jul 24 14:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-140056 addons                                                                        | addons-140056          | jenkins | v1.33.1 | 23 Jul 24 14:37 UTC | 23 Jul 24 14:37 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:27:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:27:28.676695 3324089 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:27:28.676884 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:28.676913 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:27:28.676933 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:28.677188 3324089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:27:28.677639 3324089 out.go:298] Setting JSON to false
	I0723 14:27:28.678557 3324089 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":83395,"bootTime":1721661454,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 14:27:28.678658 3324089 start.go:139] virtualization:  
	I0723 14:27:28.681154 3324089 out.go:177] * [addons-140056] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0723 14:27:28.683405 3324089 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:27:28.683575 3324089 notify.go:220] Checking for updates...
	I0723 14:27:28.687447 3324089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:27:28.689253 3324089 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:27:28.690933 3324089 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 14:27:28.692548 3324089 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 14:27:28.694491 3324089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:27:28.696530 3324089 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:27:28.717550 3324089 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 14:27:28.717682 3324089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:28.779563 3324089 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-23 14:27:28.76973313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:28.779685 3324089 docker.go:307] overlay module found
	I0723 14:27:28.781695 3324089 out.go:177] * Using the docker driver based on user configuration
	I0723 14:27:28.783669 3324089 start.go:297] selected driver: docker
	I0723 14:27:28.783686 3324089 start.go:901] validating driver "docker" against <nil>
	I0723 14:27:28.783700 3324089 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:27:28.784352 3324089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:28.838679 3324089 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-23 14:27:28.830269269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:28.838848 3324089 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 14:27:28.839084 3324089 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:27:28.841437 3324089 out.go:177] * Using Docker driver with root privileges
	I0723 14:27:28.843754 3324089 cni.go:84] Creating CNI manager for ""
	I0723 14:27:28.843775 3324089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:27:28.843787 3324089 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 14:27:28.843870 3324089 start.go:340] cluster config:
	{Name:addons-140056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:27:28.846142 3324089 out.go:177] * Starting "addons-140056" primary control-plane node in "addons-140056" cluster
	I0723 14:27:28.847849 3324089 cache.go:121] Beginning downloading kic base image for docker with crio
	I0723 14:27:28.849949 3324089 out.go:177] * Pulling base image v0.0.44-1721687125-19319 ...
	I0723 14:27:28.852051 3324089 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:27:28.852074 3324089 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local docker daemon
	I0723 14:27:28.852096 3324089 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0723 14:27:28.852104 3324089 cache.go:56] Caching tarball of preloaded images
	I0723 14:27:28.852193 3324089 preload.go:172] Found /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0723 14:27:28.852205 3324089 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 14:27:28.852570 3324089 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/config.json ...
	I0723 14:27:28.852602 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/config.json: {Name:mk80729d63297d5bf8076b3f30a05eb0be283ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:28.866706 3324089 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
	I0723 14:27:28.866828 3324089 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory
	I0723 14:27:28.866847 3324089 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory, skipping pull
	I0723 14:27:28.866853 3324089 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae exists in cache, skipping pull
	I0723 14:27:28.866859 3324089 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae as a tarball
	I0723 14:27:28.866864 3324089 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from local cache
	I0723 14:27:45.818570 3324089 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from cached tarball
	I0723 14:27:45.818608 3324089 cache.go:194] Successfully downloaded all kic artifacts
	I0723 14:27:45.818653 3324089 start.go:360] acquireMachinesLock for addons-140056: {Name:mk87e835be44b124ffc36d4dd9b3cf7b09db44cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 14:27:45.819355 3324089 start.go:364] duration metric: took 675.553µs to acquireMachinesLock for "addons-140056"
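	The lock spec in the line above shows acquireMachinesLock's retry parameters: poll every 500ms, give up after 10m. A rough standalone sketch of such a timed lock built on an O_EXCL lockfile follows; tryLock is an illustrative stand-in, not minikube's actual lock package.
	
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// tryLock polls for an exclusive lockfile, mirroring the Delay/Timeout
	// fields in the log's lock spec. It returns an unlock func on success.
	func tryLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay)
		}
	}
	
	func main() {
		unlock, err := tryLock(os.TempDir()+"/addons-140056.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer unlock()
		fmt.Println("lock held; machine creation would run here")
	}
	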
	I0723 14:27:45.819391 3324089 start.go:93] Provisioning new machine with config: &{Name:addons-140056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:27:45.819481 3324089 start.go:125] createHost starting for "" (driver="docker")
	I0723 14:27:45.821588 3324089 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0723 14:27:45.821841 3324089 start.go:159] libmachine.API.Create for "addons-140056" (driver="docker")
	I0723 14:27:45.821879 3324089 client.go:168] LocalClient.Create starting
	I0723 14:27:45.821998 3324089 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem
	I0723 14:27:46.093836 3324089 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem
	I0723 14:27:46.464651 3324089 cli_runner.go:164] Run: docker network inspect addons-140056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0723 14:27:46.480067 3324089 cli_runner.go:211] docker network inspect addons-140056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0723 14:27:46.480159 3324089 network_create.go:284] running [docker network inspect addons-140056] to gather additional debugging logs...
	I0723 14:27:46.480179 3324089 cli_runner.go:164] Run: docker network inspect addons-140056
	W0723 14:27:46.495772 3324089 cli_runner.go:211] docker network inspect addons-140056 returned with exit code 1
	I0723 14:27:46.495804 3324089 network_create.go:287] error running [docker network inspect addons-140056]: docker network inspect addons-140056: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-140056 not found
	I0723 14:27:46.495817 3324089 network_create.go:289] output of [docker network inspect addons-140056]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-140056 not found
	
	** /stderr **
	I0723 14:27:46.495927 3324089 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0723 14:27:46.519578 3324089 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400178c6c0}
	I0723 14:27:46.519624 3324089 network_create.go:124] attempt to create docker network addons-140056 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0723 14:27:46.519686 3324089 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-140056 addons-140056
	I0723 14:27:46.585952 3324089 network_create.go:108] docker network addons-140056 192.168.49.0/24 created
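	For reference, the docker network create invocation above can be reproduced outside minikube. A minimal Go sketch of the same command, assuming docker is on PATH and the 192.168.49.0/24 subnet is still free; the name, subnet, and labels are copied from the log lines above.
	
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
	)
	
	func main() {
		// Values copied from the network_create step in the log; adjust for your profile.
		name := "addons-140056"
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name,
			name).CombinedOutput()
		if err != nil {
			log.Fatalf("network create failed: %v\n%s", err, out)
		}
		fmt.Printf("created network %s: %s", name, out) // docker prints the new network ID
	}
	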
	I0723 14:27:46.585988 3324089 kic.go:121] calculated static IP "192.168.49.2" for the "addons-140056" container
	I0723 14:27:46.586062 3324089 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0723 14:27:46.601629 3324089 cli_runner.go:164] Run: docker volume create addons-140056 --label name.minikube.sigs.k8s.io=addons-140056 --label created_by.minikube.sigs.k8s.io=true
	I0723 14:27:46.619041 3324089 oci.go:103] Successfully created a docker volume addons-140056
	I0723 14:27:46.619143 3324089 cli_runner.go:164] Run: docker run --rm --name addons-140056-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-140056 --entrypoint /usr/bin/test -v addons-140056:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -d /var/lib
	I0723 14:27:48.625028 3324089 cli_runner.go:217] Completed: docker run --rm --name addons-140056-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-140056 --entrypoint /usr/bin/test -v addons-140056:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -d /var/lib: (2.005837296s)
	I0723 14:27:48.625062 3324089 oci.go:107] Successfully prepared a docker volume addons-140056
	I0723 14:27:48.625087 3324089 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:27:48.625106 3324089 kic.go:194] Starting extracting preloaded images to volume ...
	I0723 14:27:48.625207 3324089 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-140056:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -I lz4 -xf /preloaded.tar -C /extractDir
	I0723 14:27:52.803707 3324089 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-140056:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.178443041s)
	I0723 14:27:52.803742 3324089 kic.go:203] duration metric: took 4.178632163s to extract preloaded images to volume ...
	W0723 14:27:52.803880 3324089 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0723 14:27:52.804000 3324089 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0723 14:27:52.853957 3324089 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-140056 --name addons-140056 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-140056 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-140056 --network addons-140056 --ip 192.168.49.2 --volume addons-140056:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae
	I0723 14:27:53.197662 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Running}}
	I0723 14:27:53.222518 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:27:53.246584 3324089 cli_runner.go:164] Run: docker exec addons-140056 stat /var/lib/dpkg/alternatives/iptables
	I0723 14:27:53.317873 3324089 oci.go:144] the created container "addons-140056" has a running status.
	I0723 14:27:53.317900 3324089 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa...
	I0723 14:27:53.750712 3324089 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0723 14:27:53.771604 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:27:53.791817 3324089 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0723 14:27:53.791836 3324089 kic_runner.go:114] Args: [docker exec --privileged addons-140056 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0723 14:27:53.865928 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:27:53.890330 3324089 machine.go:94] provisionDockerMachine start ...
	I0723 14:27:53.890421 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:53.914772 3324089 main.go:141] libmachine: Using SSH client type: native
	I0723 14:27:53.915031 3324089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37152 <nil> <nil>}
	I0723 14:27:53.915039 3324089 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 14:27:54.078473 3324089 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-140056
	
	I0723 14:27:54.078495 3324089 ubuntu.go:169] provisioning hostname "addons-140056"
	I0723 14:27:54.078587 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:54.098059 3324089 main.go:141] libmachine: Using SSH client type: native
	I0723 14:27:54.098297 3324089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37152 <nil> <nil>}
	I0723 14:27:54.098309 3324089 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-140056 && echo "addons-140056" | sudo tee /etc/hostname
	I0723 14:27:54.245063 3324089 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-140056
	
	I0723 14:27:54.245156 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:54.268308 3324089 main.go:141] libmachine: Using SSH client type: native
	I0723 14:27:54.268555 3324089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37152 <nil> <nil>}
	I0723 14:27:54.268576 3324089 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-140056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-140056/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-140056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 14:27:54.406555 3324089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
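	The SSH command above edits /etc/hosts only when the hostname is missing: it rewrites an existing 127.0.1.1 entry, otherwise appends one. The same branching as a pure-Go sketch over an in-memory hosts file; pinHostname is an illustrative helper, not a minikube function.
	
	package main
	
	import (
		"fmt"
		"regexp"
		"strings"
	)
	
	// pinHostname mirrors the grep/sed/tee logic of the provisioning command above.
	func pinHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // hostname already present: leave the file alone
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, "127.0.1.1 "+name) // rewrite the existing entry
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n" // append a new one
	}
	
	func main() {
		fmt.Print(pinHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "addons-140056"))
	}
	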
	I0723 14:27:54.406644 3324089 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19319-3317687/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-3317687/.minikube}
	I0723 14:27:54.406702 3324089 ubuntu.go:177] setting up certificates
	I0723 14:27:54.406734 3324089 provision.go:84] configureAuth start
	I0723 14:27:54.406846 3324089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-140056
	I0723 14:27:54.422125 3324089 provision.go:143] copyHostCerts
	I0723 14:27:54.422200 3324089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.pem (1082 bytes)
	I0723 14:27:54.422322 3324089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/cert.pem (1123 bytes)
	I0723 14:27:54.422388 3324089 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/key.pem (1679 bytes)
	I0723 14:27:54.422442 3324089 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem org=jenkins.addons-140056 san=[127.0.0.1 192.168.49.2 addons-140056 localhost minikube]
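	The server cert above is signed by the minikube CA with the SAN list [127.0.0.1 192.168.49.2 addons-140056 localhost minikube]. A standard-library Go sketch of a certificate with the same shape, using a throwaway CA in place of ca.pem/ca-key.pem; key size and exact fields are illustrative, and errors are elided for brevity.
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway self-signed CA standing in for the minikube CA files.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server cert carrying the SAN list from the provision.go line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-140056"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			DNSNames:     []string{"addons-140056", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	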
	I0723 14:27:54.769995 3324089 provision.go:177] copyRemoteCerts
	I0723 14:27:54.770070 3324089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 14:27:54.770113 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:54.786286 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:54.875157 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0723 14:27:54.899396 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 14:27:54.922996 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 14:27:54.946782 3324089 provision.go:87] duration metric: took 540.018184ms to configureAuth
	I0723 14:27:54.946812 3324089 ubuntu.go:193] setting minikube options for container-runtime
	I0723 14:27:54.946997 3324089 config.go:182] Loaded profile config "addons-140056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:27:54.947116 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:54.962770 3324089 main.go:141] libmachine: Using SSH client type: native
	I0723 14:27:54.963002 3324089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37152 <nil> <nil>}
	I0723 14:27:54.963023 3324089 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 14:27:55.190953 3324089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 14:27:55.191039 3324089 machine.go:97] duration metric: took 1.300690298s to provisionDockerMachine
	I0723 14:27:55.191064 3324089 client.go:171] duration metric: took 9.369174116s to LocalClient.Create
	I0723 14:27:55.191116 3324089 start.go:167] duration metric: took 9.369276057s to libmachine.API.Create "addons-140056"
	I0723 14:27:55.191142 3324089 start.go:293] postStartSetup for "addons-140056" (driver="docker")
	I0723 14:27:55.191169 3324089 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 14:27:55.191274 3324089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 14:27:55.191367 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:55.207728 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:55.299546 3324089 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 14:27:55.302604 3324089 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0723 14:27:55.302643 3324089 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0723 14:27:55.302654 3324089 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0723 14:27:55.302661 3324089 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0723 14:27:55.302672 3324089 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/addons for local assets ...
	I0723 14:27:55.302746 3324089 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/files for local assets ...
	I0723 14:27:55.302773 3324089 start.go:296] duration metric: took 111.60936ms for postStartSetup
	I0723 14:27:55.303102 3324089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-140056
	I0723 14:27:55.318688 3324089 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/config.json ...
	I0723 14:27:55.318990 3324089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:27:55.319042 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:55.335150 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:55.419277 3324089 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0723 14:27:55.423838 3324089 start.go:128] duration metric: took 9.604340903s to createHost
	I0723 14:27:55.423861 3324089 start.go:83] releasing machines lock for "addons-140056", held for 9.604491675s
	I0723 14:27:55.423934 3324089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-140056
	I0723 14:27:55.439581 3324089 ssh_runner.go:195] Run: cat /version.json
	I0723 14:27:55.439654 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:55.439945 3324089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 14:27:55.440013 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:27:55.456700 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:55.464098 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:27:55.546266 3324089 ssh_runner.go:195] Run: systemctl --version
	I0723 14:27:55.675664 3324089 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 14:27:55.816283 3324089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0723 14:27:55.820584 3324089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:27:55.840953 3324089 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0723 14:27:55.841038 3324089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 14:27:55.870687 3324089 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0723 14:27:55.870712 3324089 start.go:495] detecting cgroup driver to use...
	I0723 14:27:55.870745 3324089 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0723 14:27:55.870803 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 14:27:55.886934 3324089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 14:27:55.899267 3324089 docker.go:217] disabling cri-docker service (if available) ...
	I0723 14:27:55.899376 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 14:27:55.912780 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 14:27:55.927522 3324089 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 14:27:56.009262 3324089 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 14:27:56.098982 3324089 docker.go:233] disabling docker service ...
	I0723 14:27:56.099058 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 14:27:56.119778 3324089 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 14:27:56.131896 3324089 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 14:27:56.223929 3324089 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 14:27:56.323672 3324089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 14:27:56.335627 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 14:27:56.351927 3324089 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 14:27:56.351998 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.361751 3324089 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 14:27:56.361889 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.372082 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.381803 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.391549 3324089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 14:27:56.400775 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.410496 3324089 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.425884 3324089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 14:27:56.435819 3324089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 14:27:56.444410 3324089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 14:27:56.452954 3324089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:27:56.531977 3324089 ssh_runner.go:195] Run: sudo systemctl restart crio
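	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before restarting crio: pin pause_image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, move conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough Go equivalent of the first two rewrites, operating on a hypothetical in-memory copy of the file rather than the real one on the node.
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// Hypothetical starting contents; the real file is edited over SSH as shown above.
		conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}
	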
	I0723 14:27:56.644516 3324089 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 14:27:56.644650 3324089 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 14:27:56.648877 3324089 start.go:563] Will wait 60s for crictl version
	I0723 14:27:56.648962 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:27:56.652551 3324089 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 14:27:56.696220 3324089 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0723 14:27:56.696331 3324089 ssh_runner.go:195] Run: crio --version
	I0723 14:27:56.738494 3324089 ssh_runner.go:195] Run: crio --version
	I0723 14:27:56.780501 3324089 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0723 14:27:56.782339 3324089 cli_runner.go:164] Run: docker network inspect addons-140056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0723 14:27:56.798501 3324089 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0723 14:27:56.801908 3324089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:27:56.812808 3324089 kubeadm.go:883] updating cluster {Name:addons-140056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 14:27:56.812936 3324089 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:27:56.812995 3324089 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:27:56.891655 3324089 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:27:56.891683 3324089 crio.go:433] Images already preloaded, skipping extraction
	I0723 14:27:56.891747 3324089 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 14:27:56.931318 3324089 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 14:27:56.931338 3324089 cache_images.go:84] Images are preloaded, skipping loading
	I0723 14:27:56.931346 3324089 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0723 14:27:56.931462 3324089 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-140056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 14:27:56.931559 3324089 ssh_runner.go:195] Run: crio config
	I0723 14:27:56.979758 3324089 cni.go:84] Creating CNI manager for ""
	I0723 14:27:56.979786 3324089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:27:56.979802 3324089 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 14:27:56.979827 3324089 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-140056 NodeName:addons-140056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 14:27:56.979976 3324089 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-140056"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
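	
	The kubeadm config above is one file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers; kubeadm accepts them all through a single --config file. A small Go sketch of that split, run against a stub stream rather than the real kubeadm.yaml.
	
	package main
	
	import (
		"fmt"
		"regexp"
		"strings"
	)
	
	func main() {
		// Stub standing in for /var/tmp/minikube/kubeadm.yaml.new above.
		stream := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\n" +
			"apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\n" +
			"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
			"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
		kindRE := regexp.MustCompile(`(?m)^kind: (.+)$`)
		for i, doc := range strings.Split(stream, "\n---\n") {
			if m := kindRE.FindStringSubmatch(doc); m != nil {
				fmt.Printf("document %d: %s\n", i+1, m[1])
			}
		}
	}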
	
	I0723 14:27:56.980050 3324089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 14:27:56.989213 3324089 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 14:27:56.989297 3324089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 14:27:56.998154 3324089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0723 14:27:57.017400 3324089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 14:27:57.036331 3324089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0723 14:27:57.055318 3324089 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0723 14:27:57.058990 3324089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0723 14:27:57.070178 3324089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:27:57.153158 3324089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:27:57.167266 3324089 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056 for IP: 192.168.49.2
	I0723 14:27:57.167288 3324089 certs.go:194] generating shared ca certs ...
	I0723 14:27:57.167304 3324089 certs.go:226] acquiring lock for ca certs: {Name:mk9061483da1430ff0fd8e32bc77025286e53111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.168259 3324089 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key
	I0723 14:27:57.481023 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt ...
	I0723 14:27:57.481097 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt: {Name:mkac5e6ee201c918e9f6812b3f036372d7b91909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.481333 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key ...
	I0723 14:27:57.481368 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key: {Name:mk5044d99e3911a26057aa19d541ef688454b0bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.481508 3324089 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key
	I0723 14:27:57.898965 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt ...
	I0723 14:27:57.899001 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt: {Name:mk52321785175ef4f7dd53b6748c34de00ade795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.899224 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key ...
	I0723 14:27:57.899239 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key: {Name:mkf4f18e7a143e31fd6ffcc2466f4c28bfc32125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:57.899325 3324089 certs.go:256] generating profile certs ...
	I0723 14:27:57.899382 3324089 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.key
	I0723 14:27:57.899401 3324089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt with IP's: []
	I0723 14:27:58.253169 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt ...
	I0723 14:27:58.253202 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: {Name:mk3cb3b40a01ee6617f9deb0f299f2b0ed1c6ffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.254041 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.key ...
	I0723 14:27:58.254057 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.key: {Name:mk65719a9b4f855f11034b945737dea15d736bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.254152 3324089 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key.a9e22fa1
	I0723 14:27:58.254176 3324089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt.a9e22fa1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0723 14:27:58.395859 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt.a9e22fa1 ...
	I0723 14:27:58.395891 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt.a9e22fa1: {Name:mk77e7ff46e1d6669d88968c50a47abce2a5fb2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.396070 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key.a9e22fa1 ...
	I0723 14:27:58.396083 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key.a9e22fa1: {Name:mkfd1c7c65c3fb36f2123a3171695c1c8765d629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.396166 3324089 certs.go:381] copying /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt.a9e22fa1 -> /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt
	I0723 14:27:58.396248 3324089 certs.go:385] copying /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key.a9e22fa1 -> /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key
	I0723 14:27:58.396306 3324089 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.key
	I0723 14:27:58.396323 3324089 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.crt with IP's: []
	I0723 14:27:58.660745 3324089 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.crt ...
	I0723 14:27:58.660778 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.crt: {Name:mk8b5193b638b3cd0f127ce6a2cfa785ff40ec62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.661544 3324089 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.key ...
	I0723 14:27:58.661562 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.key: {Name:mk19e0c6dc1a71614c2ec1d64282d70726deeb4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:27:58.661762 3324089 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 14:27:58.661808 3324089 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem (1082 bytes)
	I0723 14:27:58.661834 3324089 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem (1123 bytes)
	I0723 14:27:58.661861 3324089 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem (1679 bytes)
	I0723 14:27:58.662461 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 14:27:58.687376 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0723 14:27:58.711841 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 14:27:58.735943 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0723 14:27:58.760301 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0723 14:27:58.784325 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0723 14:27:58.808116 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 14:27:58.832466 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 14:27:58.861271 3324089 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 14:27:58.891173 3324089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 14:27:58.909318 3324089 ssh_runner.go:195] Run: openssl version
	I0723 14:27:58.914927 3324089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 14:27:58.924118 3324089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:27:58.927711 3324089 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 14:27 /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:27:58.927775 3324089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 14:27:58.934309 3324089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
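
Note: the two shell runs above install the cluster CA into the node's system trust store using OpenSSL's subject-hash naming convention — `openssl x509 -hash -noout` yields the hash (b5213941 here), and the cert is exposed as /etc/ssl/certs/<hash>.0 so TLS clients discover it. A minimal Go sketch of that step, assuming openssl is on PATH (paths illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA mirrors the ssh_runner steps above: compute the OpenSSL
    // subject hash of a CA certificate and link it under /etc/ssl/certs
    // as <hash>.0 so TLS clients pick it up.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace a stale link, like `ln -fs`
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
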
	I0723 14:27:58.943727 3324089 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 14:27:58.946959 3324089 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0723 14:27:58.947006 3324089 kubeadm.go:392] StartCluster: {Name:addons-140056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-140056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
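
Note: the StartCluster blob above is a Go struct dumped with fmt's %+v-style verb, which is why every field prints as Name:value with no quoting. A trimmed, illustrative mirror (real minikube config types have many more fields) that reproduces the format:

    package main

    import "fmt"

    // Illustrative subset of the cluster config printed above.
    type KubernetesConfig struct {
    	KubernetesVersion string
    	ClusterName       string
    	ContainerRuntime  string
    }

    type ClusterConfig struct {
    	Name             string
    	Driver           string
    	Memory           int
    	CPUs             int
    	KubernetesConfig KubernetesConfig
    }

    func main() {
    	cc := ClusterConfig{
    		Name:   "addons-140056",
    		Driver: "docker",
    		Memory: 4000,
    		CPUs:   2,
    		KubernetesConfig: KubernetesConfig{
    			KubernetesVersion: "v1.30.3",
    			ClusterName:       "addons-140056",
    			ContainerRuntime:  "crio",
    		},
    	}
    	// Prints: StartCluster: {Name:addons-140056 Driver:docker Memory:4000 ...}
    	fmt.Printf("StartCluster: %+v\n", cc)
    }
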
	I0723 14:27:58.947099 3324089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 14:27:58.947162 3324089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 14:27:58.995350 3324089 cri.go:89] found id: ""
	I0723 14:27:58.995428 3324089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0723 14:27:59.008656 3324089 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0723 14:27:59.017679 3324089 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0723 14:27:59.017776 3324089 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0723 14:27:59.026863 3324089 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0723 14:27:59.026887 3324089 kubeadm.go:157] found existing configuration files:
	
	I0723 14:27:59.026941 3324089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0723 14:27:59.036137 3324089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0723 14:27:59.036253 3324089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0723 14:27:59.045064 3324089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0723 14:27:59.053749 3324089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0723 14:27:59.053833 3324089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0723 14:27:59.062387 3324089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0723 14:27:59.070863 3324089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0723 14:27:59.070933 3324089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0723 14:27:59.079291 3324089 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0723 14:27:59.088081 3324089 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0723 14:27:59.088151 3324089 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0723 14:27:59.097802 3324089 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0723 14:27:59.141713 3324089 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0723 14:27:59.141914 3324089 kubeadm.go:310] [preflight] Running pre-flight checks
	I0723 14:27:59.189397 3324089 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0723 14:27:59.189469 3324089 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
	I0723 14:27:59.189511 3324089 kubeadm.go:310] OS: Linux
	I0723 14:27:59.189561 3324089 kubeadm.go:310] CGROUPS_CPU: enabled
	I0723 14:27:59.189611 3324089 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0723 14:27:59.189663 3324089 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0723 14:27:59.189712 3324089 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0723 14:27:59.189761 3324089 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0723 14:27:59.189815 3324089 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0723 14:27:59.189861 3324089 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0723 14:27:59.189910 3324089 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0723 14:27:59.189959 3324089 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0723 14:27:59.259922 3324089 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0723 14:27:59.260097 3324089 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0723 14:27:59.260228 3324089 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0723 14:27:59.493849 3324089 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0723 14:27:59.497773 3324089 out.go:204]   - Generating certificates and keys ...
	I0723 14:27:59.497865 3324089 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0723 14:27:59.497933 3324089 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0723 14:28:00.574154 3324089 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0723 14:28:00.746368 3324089 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0723 14:28:00.962272 3324089 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0723 14:28:01.467846 3324089 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0723 14:28:01.787231 3324089 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0723 14:28:01.787688 3324089 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-140056 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0723 14:28:02.873810 3324089 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0723 14:28:02.874161 3324089 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-140056 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0723 14:28:03.397669 3324089 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0723 14:28:03.631357 3324089 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0723 14:28:03.960433 3324089 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0723 14:28:03.960732 3324089 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0723 14:28:05.151975 3324089 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0723 14:28:05.397886 3324089 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0723 14:28:06.555296 3324089 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0723 14:28:07.058194 3324089 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0723 14:28:07.338075 3324089 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0723 14:28:07.338724 3324089 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0723 14:28:07.341663 3324089 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0723 14:28:07.343973 3324089 out.go:204]   - Booting up control plane ...
	I0723 14:28:07.344076 3324089 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0723 14:28:07.344156 3324089 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0723 14:28:07.345097 3324089 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0723 14:28:07.356266 3324089 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0723 14:28:07.357308 3324089 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0723 14:28:07.357541 3324089 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0723 14:28:07.451910 3324089 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0723 14:28:07.452000 3324089 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0723 14:28:08.953519 3324089 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501676588s
	I0723 14:28:08.953613 3324089 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0723 14:28:14.455140 3324089 kubeadm.go:310] [api-check] The API server is healthy after 5.501597014s
	I0723 14:28:14.475262 3324089 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0723 14:28:14.488857 3324089 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0723 14:28:14.510149 3324089 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0723 14:28:14.510349 3324089 kubeadm.go:310] [mark-control-plane] Marking the node addons-140056 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0723 14:28:14.521093 3324089 kubeadm.go:310] [bootstrap-token] Using token: msqdak.i15bkzpxmuc2bwv4
	I0723 14:28:14.523052 3324089 out.go:204]   - Configuring RBAC rules ...
	I0723 14:28:14.523187 3324089 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0723 14:28:14.528276 3324089 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0723 14:28:14.536865 3324089 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0723 14:28:14.543238 3324089 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0723 14:28:14.547350 3324089 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0723 14:28:14.552380 3324089 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0723 14:28:14.863945 3324089 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0723 14:28:15.307264 3324089 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0723 14:28:15.862819 3324089 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0723 14:28:15.863992 3324089 kubeadm.go:310] 
	I0723 14:28:15.864071 3324089 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0723 14:28:15.864088 3324089 kubeadm.go:310] 
	I0723 14:28:15.864164 3324089 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0723 14:28:15.864172 3324089 kubeadm.go:310] 
	I0723 14:28:15.864197 3324089 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0723 14:28:15.864258 3324089 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0723 14:28:15.864310 3324089 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0723 14:28:15.864319 3324089 kubeadm.go:310] 
	I0723 14:28:15.864370 3324089 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0723 14:28:15.864377 3324089 kubeadm.go:310] 
	I0723 14:28:15.864423 3324089 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0723 14:28:15.864431 3324089 kubeadm.go:310] 
	I0723 14:28:15.864481 3324089 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0723 14:28:15.864558 3324089 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0723 14:28:15.864627 3324089 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0723 14:28:15.864634 3324089 kubeadm.go:310] 
	I0723 14:28:15.864716 3324089 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0723 14:28:15.864794 3324089 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0723 14:28:15.864802 3324089 kubeadm.go:310] 
	I0723 14:28:15.864883 3324089 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token msqdak.i15bkzpxmuc2bwv4 \
	I0723 14:28:15.864985 3324089 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d2fc8c293f7a91921409feabe0671bea5964c21341b2c1e458fbfaf2884181ca \
	I0723 14:28:15.865009 3324089 kubeadm.go:310] 	--control-plane 
	I0723 14:28:15.865014 3324089 kubeadm.go:310] 
	I0723 14:28:15.865096 3324089 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0723 14:28:15.865104 3324089 kubeadm.go:310] 
	I0723 14:28:15.865183 3324089 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token msqdak.i15bkzpxmuc2bwv4 \
	I0723 14:28:15.865284 3324089 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d2fc8c293f7a91921409feabe0671bea5964c21341b2c1e458fbfaf2884181ca 
	I0723 14:28:15.868894 3324089 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1065-aws\n", err: exit status 1
	I0723 14:28:15.869025 3324089 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
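
Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo), which is how kubeadm lets joining nodes pin the CA without shipping the cert. A minimal Go sketch that recomputes it from the certificateDir kubeadm is using here:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Path matches the certificateDir logged above.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
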
	I0723 14:28:15.869043 3324089 cni.go:84] Creating CNI manager for ""
	I0723 14:28:15.869050 3324089 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:28:15.871439 3324089 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0723 14:28:15.873346 3324089 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0723 14:28:15.877424 3324089 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0723 14:28:15.877442 3324089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0723 14:28:15.895673 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
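
Note: cni.go's recommendation above is driven by the driver/runtime pair — per the log, the docker driver combined with a non-docker runtime (crio here) gets kindnet, whose manifest is then applied with the versioned kubectl. A hypothetical reduction of that decision (helper name invented for illustration):

    package main

    import "fmt"

    // chooseCNI sketches the decision logged by cni.go: the docker driver
    // with a non-docker runtime defaults to kindnet. Other combinations
    // are handled elsewhere in minikube and are omitted here.
    func chooseCNI(driver, runtime string) string {
    	if driver == "docker" && runtime != "docker" {
    		return "kindnet"
    	}
    	return ""
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "crio")) // kindnet
    }
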
	I0723 14:28:16.194145 3324089 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0723 14:28:16.194286 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:16.194377 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-140056 minikube.k8s.io/updated_at=2024_07_23T14_28_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=addons-140056 minikube.k8s.io/primary=true
	I0723 14:28:16.363680 3324089 ops.go:34] apiserver oom_adj: -16
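
Note: the -16 read above comes from /proc/<pid>/oom_adj for the kube-apiserver process; a strongly negative value biases the kernel OOM killer away from killing the apiserver under memory pressure. A small Go sketch of the same read (pgrep-based, as in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Locate the apiserver pid the same way the log does, via pgrep.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.Fields(string(out))[0]
    	// oom_adj is the legacy knob; modern kernels also expose oom_score_adj.
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
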
	I0723 14:28:16.363771 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:16.863918 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:17.363973 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:17.864242 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:18.363933 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:18.864687 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:19.364399 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:19.863936 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:20.364712 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:20.863934 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:21.363990 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:21.863981 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:22.364258 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:22.863924 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:23.364531 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:23.864638 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:24.364146 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:24.864490 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:25.363949 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:25.864196 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:26.364622 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:26.864505 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:27.364793 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:27.864747 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:28.363877 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:28.864461 3324089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0723 14:28:28.989454 3324089 kubeadm.go:1113] duration metric: took 12.795219002s to wait for elevateKubeSystemPrivileges
	I0723 14:28:28.989481 3324089 kubeadm.go:394] duration metric: took 30.042478293s to StartCluster
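
Note: the half-second cadence of the `kubectl get sa default` runs above is a readiness gate — elevateKubeSystemPrivileges is not considered done until the token controller has created the default ServiceAccount, so minikube polls for it. A hedged sketch of such a loop, with the kubectl path and kubeconfig taken from the log and the timeout chosen for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default ServiceAccount exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence above
    	}
    	fmt.Println("timed out waiting for default ServiceAccount")
    }
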
	I0723 14:28:28.989498 3324089 settings.go:142] acquiring lock: {Name:mkc6849065e362533c3a341cb8f31c09fc3ebad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:28:28.990197 3324089 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:28:28.990645 3324089 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/kubeconfig: {Name:mk3abebf3fbbb55a1b61d2bc2eb17945b9b8d937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 14:28:28.990829 3324089 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0723 14:28:28.990909 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0723 14:28:28.991153 3324089 config.go:182] Loaded profile config "addons-140056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:28:28.991187 3324089 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0723 14:28:28.991269 3324089 addons.go:69] Setting yakd=true in profile "addons-140056"
	I0723 14:28:28.991294 3324089 addons.go:234] Setting addon yakd=true in "addons-140056"
	I0723 14:28:28.991318 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:28.991756 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.992376 3324089 addons.go:69] Setting metrics-server=true in profile "addons-140056"
	I0723 14:28:28.992398 3324089 addons.go:234] Setting addon metrics-server=true in "addons-140056"
	I0723 14:28:28.992423 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:28.992822 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.993981 3324089 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-140056"
	I0723 14:28:28.995670 3324089 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-140056"
	I0723 14:28:28.995922 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:28.996792 3324089 out.go:177] * Verifying Kubernetes components...
	I0723 14:28:28.995471 3324089 addons.go:69] Setting cloud-spanner=true in profile "addons-140056"
	I0723 14:28:28.997453 3324089 addons.go:234] Setting addon cloud-spanner=true in "addons-140056"
	I0723 14:28:28.997512 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:28.997881 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995480 3324089 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-140056"
	I0723 14:28:29.003254 3324089 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-140056"
	I0723 14:28:29.003333 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.003883 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.004615 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.005228 3324089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 14:28:28.995493 3324089 addons.go:69] Setting default-storageclass=true in profile "addons-140056"
	I0723 14:28:29.010628 3324089 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-140056"
	I0723 14:28:29.011003 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995499 3324089 addons.go:69] Setting gcp-auth=true in profile "addons-140056"
	I0723 14:28:29.018870 3324089 mustload.go:65] Loading cluster: addons-140056
	I0723 14:28:29.019063 3324089 config.go:182] Loaded profile config "addons-140056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:28:29.019318 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995509 3324089 addons.go:69] Setting ingress=true in profile "addons-140056"
	I0723 14:28:29.020470 3324089 addons.go:234] Setting addon ingress=true in "addons-140056"
	I0723 14:28:29.020514 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.020907 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995516 3324089 addons.go:69] Setting ingress-dns=true in profile "addons-140056"
	I0723 14:28:29.030654 3324089 addons.go:234] Setting addon ingress-dns=true in "addons-140056"
	I0723 14:28:29.030711 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.031141 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995621 3324089 addons.go:69] Setting inspektor-gadget=true in profile "addons-140056"
	I0723 14:28:29.067255 3324089 addons.go:234] Setting addon inspektor-gadget=true in "addons-140056"
	I0723 14:28:29.067371 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.069707 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995820 3324089 addons.go:69] Setting volcano=true in profile "addons-140056"
	I0723 14:28:29.082299 3324089 addons.go:234] Setting addon volcano=true in "addons-140056"
	I0723 14:28:29.082344 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.083589 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995830 3324089 addons.go:69] Setting registry=true in profile "addons-140056"
	I0723 14:28:29.102198 3324089 addons.go:234] Setting addon registry=true in "addons-140056"
	I0723 14:28:29.102235 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.102762 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.104541 3324089 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0723 14:28:29.112022 3324089 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0723 14:28:29.112105 3324089 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0723 14:28:29.112207 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:28.995837 3324089 addons.go:69] Setting storage-provisioner=true in profile "addons-140056"
	I0723 14:28:29.128635 3324089 addons.go:234] Setting addon storage-provisioner=true in "addons-140056"
	I0723 14:28:29.128679 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.129116 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995843 3324089 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-140056"
	I0723 14:28:29.138152 3324089 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-140056"
	I0723 14:28:29.138472 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:28.995868 3324089 addons.go:69] Setting volumesnapshots=true in profile "addons-140056"
	I0723 14:28:29.138610 3324089 addons.go:234] Setting addon volumesnapshots=true in "addons-140056"
	I0723 14:28:29.138644 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.138999 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.139387 3324089 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0723 14:28:29.162926 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0723 14:28:29.162948 3324089 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0723 14:28:29.163014 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.194749 3324089 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0723 14:28:29.203097 3324089 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0723 14:28:29.203177 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0723 14:28:29.203272 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.212431 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0723 14:28:29.215493 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0723 14:28:29.220821 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0723 14:28:29.227322 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0723 14:28:29.240237 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0723 14:28:29.250016 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0723 14:28:29.250293 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.261846 3324089 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0723 14:28:29.267377 3324089 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0723 14:28:29.267453 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0723 14:28:29.267568 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.304017 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0723 14:28:29.308240 3324089 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0723 14:28:29.309955 3324089 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0723 14:28:29.309976 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0723 14:28:29.310039 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.318977 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0723 14:28:29.324221 3324089 addons.go:234] Setting addon default-storageclass=true in "addons-140056"
	I0723 14:28:29.324263 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.324668 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.332349 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0723 14:28:29.332372 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0723 14:28:29.332443 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.335996 3324089 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0723 14:28:29.343955 3324089 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 14:28:29.347078 3324089 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0723 14:28:29.348122 3324089 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	W0723 14:28:29.347350 3324089 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0723 14:28:29.349451 3324089 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 14:28:29.349469 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0723 14:28:29.349535 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.355130 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0723 14:28:29.355154 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0723 14:28:29.355259 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.379556 3324089 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-140056"
	I0723 14:28:29.379602 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:29.379713 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.379988 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:29.381682 3324089 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 14:28:29.386048 3324089 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0723 14:28:29.386068 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0723 14:28:29.386133 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.405683 3324089 out.go:177]   - Using image docker.io/registry:2.8.3
	I0723 14:28:29.410571 3324089 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0723 14:28:29.412537 3324089 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0723 14:28:29.412559 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0723 14:28:29.412625 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.423015 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.425565 3324089 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0723 14:28:29.430241 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0723 14:28:29.430264 3324089 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0723 14:28:29.430331 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.459051 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.520715 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0723 14:28:29.520869 3324089 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 14:28:29.531704 3324089 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0723 14:28:29.532100 3324089 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0723 14:28:29.532162 3324089 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0723 14:28:29.532228 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.545809 3324089 out.go:177]   - Using image docker.io/busybox:stable
	I0723 14:28:29.548342 3324089 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0723 14:28:29.548405 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0723 14:28:29.548506 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:29.583962 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.584517 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.586330 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.592116 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.592866 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.666961 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.676082 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.676501 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.694746 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.695223 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:29.761473 3324089 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0723 14:28:29.761492 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0723 14:28:29.811343 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0723 14:28:29.811364 3324089 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0723 14:28:29.919009 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0723 14:28:29.937213 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0723 14:28:29.937274 3324089 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0723 14:28:29.972391 3324089 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0723 14:28:29.972412 3324089 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0723 14:28:29.987814 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0723 14:28:30.031105 3324089 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0723 14:28:30.031197 3324089 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0723 14:28:30.074597 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0723 14:28:30.074676 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0723 14:28:30.082860 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0723 14:28:30.082943 3324089 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0723 14:28:30.128176 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0723 14:28:30.134137 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0723 14:28:30.134214 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0723 14:28:30.162448 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0723 14:28:30.226257 3324089 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 14:28:30.226328 3324089 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0723 14:28:30.236728 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0723 14:28:30.242414 3324089 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0723 14:28:30.242492 3324089 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0723 14:28:30.262736 3324089 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0723 14:28:30.262808 3324089 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0723 14:28:30.266336 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0723 14:28:30.266404 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0723 14:28:30.300768 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0723 14:28:30.308266 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0723 14:28:30.312353 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0723 14:28:30.312424 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0723 14:28:30.317292 3324089 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0723 14:28:30.317368 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0723 14:28:30.411842 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0723 14:28:30.426090 3324089 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0723 14:28:30.426358 3324089 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0723 14:28:30.479575 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0723 14:28:30.479646 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0723 14:28:30.493134 3324089 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0723 14:28:30.493202 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0723 14:28:30.500594 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0723 14:28:30.515897 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0723 14:28:30.515975 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0723 14:28:30.624533 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0723 14:28:30.624555 3324089 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0723 14:28:30.678004 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0723 14:28:30.678025 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0723 14:28:30.701028 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0723 14:28:30.701054 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0723 14:28:30.708150 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0723 14:28:30.813079 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0723 14:28:30.813153 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0723 14:28:30.814763 3324089 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 14:28:30.814823 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0723 14:28:30.887186 3324089 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0723 14:28:30.887259 3324089 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0723 14:28:30.951499 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0723 14:28:30.951571 3324089 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0723 14:28:30.952708 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 14:28:30.995086 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0723 14:28:30.995167 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0723 14:28:31.014808 3324089 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0723 14:28:31.014881 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0723 14:28:31.087363 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0723 14:28:31.087446 3324089 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0723 14:28:31.114206 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0723 14:28:31.179409 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0723 14:28:31.179482 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0723 14:28:31.356677 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0723 14:28:31.356753 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0723 14:28:31.474028 3324089 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0723 14:28:31.474103 3324089 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0723 14:28:31.581307 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0723 14:28:32.043188 3324089 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.522438345s)
	I0723 14:28:32.043264 3324089 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
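	
	The sed pipeline completed above splices two fragments into the CoreDNS Corefile: a hosts block immediately before the "forward . /etc/resolv.conf" directive, and a log directive before errors. Reconstructed from the sed expressions themselves, the injected hosts block is:
	
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	
	The fallthrough keeps all other names flowing on to the forward plugin, so only host.minikube.internal is answered locally.
	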
	I0723 14:28:32.043430 3324089 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.522546441s)
	I0723 14:28:32.044889 3324089 node_ready.go:35] waiting up to 6m0s for node "addons-140056" to be "Ready" ...
	I0723 14:28:33.747896 3324089 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-140056" context rescaled to 1 replicas
	I0723 14:28:34.224159 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:34.295904 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.376820362s)
	I0723 14:28:34.295956 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.308124387s)
	I0723 14:28:34.770317 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.642063063s)
	I0723 14:28:34.770390 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.607873289s)
	I0723 14:28:35.654129 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.417320083s)
	I0723 14:28:35.654165 3324089 addons.go:475] Verifying addon ingress=true in "addons-140056"
	I0723 14:28:35.654322 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.353479018s)
	I0723 14:28:35.654614 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.346284987s)
	I0723 14:28:35.654711 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.242791419s)
	I0723 14:28:35.654729 3324089 addons.go:475] Verifying addon metrics-server=true in "addons-140056"
	I0723 14:28:35.654785 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.154117857s)
	I0723 14:28:35.654810 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.946590605s)
	I0723 14:28:35.654822 3324089 addons.go:475] Verifying addon registry=true in "addons-140056"
	I0723 14:28:35.657137 3324089 out.go:177] * Verifying registry addon...
	I0723 14:28:35.657137 3324089 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-140056 service yakd-dashboard -n yakd-dashboard
	
	I0723 14:28:35.657266 3324089 out.go:177] * Verifying ingress addon...
	I0723 14:28:35.660108 3324089 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0723 14:28:35.660993 3324089 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0723 14:28:35.686218 3324089 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0723 14:28:35.686310 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:35.689052 3324089 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0723 14:28:35.689113 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
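	
	The kapi.go:96 lines that fill most of the remaining log are one poll iteration each: list the pods matching a label selector, check whether they are all Ready, sleep, repeat. A minimal client-go sketch of that loop (an illustration of the pattern, not minikube's actual kapi code; clientset construction omitted):
	
	package addonwait
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForLabel polls until every pod matching selector in ns reports Ready.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or no pods yet: keep polling
				}
				for _, p := range pods.Items {
					ready := false
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
					if !ready {
						return false, nil // still Pending, as in the lines above
					}
				}
				return true, nil
			})
	}
	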
	W0723 14:28:35.728846 3324089 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
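	
	The default-storageclass warning above is a standard optimistic-concurrency conflict: another writer updated the local-path StorageClass between minikube's read and its write, so the apiserver rejected the stale resourceVersion. The usual remedy is what the error message suggests: re-read and re-apply in a retry loop. A minimal client-go sketch of that pattern (the annotation key is the real Kubernetes one; the function itself is illustrative, not minikube's code):
	
	package scutil
	
	import (
		"context"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)
	
	// markNonDefault clears the default-class annotation, re-fetching the
	// object on every attempt so each update carries the latest
	// resourceVersion; that staleness is what the conflict error is about.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}
	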
	I0723 14:28:35.859834 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.907055838s)
	W0723 14:28:35.859892 3324089 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0723 14:28:35.859919 3324089 retry.go:31] will retry after 340.50808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
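	
	The failed apply in this retry block is a CRD-establishment race rather than a bad manifest: the VolumeSnapshotClass instance is applied in the same kubectl invocation that creates its CRD, and the REST mapping for snapshot.storage.k8s.io/v1 is not discoverable yet, hence "ensure CRDs are installed first". minikube's own remedy is the retry visible below (with --force on the second attempt). An alternative pattern is to wait for the CRD's Established condition before creating instances; a sketch with the apiextensions client (illustrative, not minikube's code):
	
	package crdwait
	
	import (
		"context"
		"time"
	
		apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)
	
	// waitForCRDEstablished blocks until the named CRD is accepted by the
	// apiserver, after which applying its custom resources can succeed.
	func waitForCRDEstablished(ctx context.Context, cs apiextensionsclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // CRD not visible yet: keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}
	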
	I0723 14:28:35.859992 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.745711082s)
	I0723 14:28:36.167602 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:36.184086 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:36.201607 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0723 14:28:36.219068 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.637652027s)
	I0723 14:28:36.219233 3324089 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-140056"
	I0723 14:28:36.221653 3324089 out.go:177] * Verifying csi-hostpath-driver addon...
	I0723 14:28:36.224649 3324089 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0723 14:28:36.258610 3324089 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0723 14:28:36.258719 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:36.574332 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:36.666666 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:36.679392 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:36.741439 3324089 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0723 14:28:36.741516 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:37.168024 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:37.168872 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:37.229332 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:37.665254 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:37.666306 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:37.733599 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:38.165388 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:38.166467 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:38.229916 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:38.520430 3324089 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0723 14:28:38.520534 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:38.540642 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
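	
	The cli_runner/sshutil pair above shows how minikube reaches the node on the docker driver: it asks Docker which host port is bound to the container's 22/tcp and dials 127.0.0.1 on that port (37152 in this run). The same lookup via the Docker Go SDK, as a sketch (assumes github.com/docker/docker/client; minikube itself shells out to the docker CLI, as logged):
	
	package sshport
	
	import (
		"context"
		"fmt"
	
		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)
	
	// sshHostPort returns the host port Docker mapped to name's 22/tcp,
	// the value the --format template in the log line extracts.
	func sshHostPort(ctx context.Context, name string) (string, error) {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			return "", err
		}
		info, err := cli.ContainerInspect(ctx, name)
		if err != nil {
			return "", err
		}
		bindings := info.NetworkSettings.Ports[nat.Port("22/tcp")]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no host binding for 22/tcp on %s", name)
		}
		return bindings[0].HostPort, nil
	}
	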
	I0723 14:28:38.669141 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:38.670083 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:38.673296 3324089 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0723 14:28:38.701159 3324089 addons.go:234] Setting addon gcp-auth=true in "addons-140056"
	I0723 14:28:38.701268 3324089 host.go:66] Checking if "addons-140056" exists ...
	I0723 14:28:38.701753 3324089 cli_runner.go:164] Run: docker container inspect addons-140056 --format={{.State.Status}}
	I0723 14:28:38.732720 3324089 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0723 14:28:38.732780 3324089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-140056
	I0723 14:28:38.753798 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:38.761351 3324089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37152 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/addons-140056/id_rsa Username:docker}
	I0723 14:28:39.048849 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:39.165944 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:39.169123 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:39.232268 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:39.351737 3324089 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.150032093s)
	I0723 14:28:39.354864 3324089 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0723 14:28:39.357624 3324089 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0723 14:28:39.360110 3324089 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0723 14:28:39.360145 3324089 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0723 14:28:39.386963 3324089 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0723 14:28:39.386989 3324089 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0723 14:28:39.409847 3324089 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0723 14:28:39.409868 3324089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0723 14:28:39.429228 3324089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0723 14:28:39.665679 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:39.668855 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:39.755233 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:40.095932 3324089 addons.go:475] Verifying addon gcp-auth=true in "addons-140056"
	I0723 14:28:40.098894 3324089 out.go:177] * Verifying gcp-auth addon...
	I0723 14:28:40.103060 3324089 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0723 14:28:40.110068 3324089 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0723 14:28:40.110096 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:40.165912 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:40.166968 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:40.229478 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:40.606653 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:40.665170 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:40.665819 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:40.732165 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:41.106987 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:41.165718 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:41.166114 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:41.228755 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:41.549171 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:41.606517 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:41.664302 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:41.666610 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:41.745737 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:42.107460 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:42.166379 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:42.167708 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:42.234425 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:42.607561 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:42.665139 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:42.665818 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:42.730361 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:43.107239 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:43.164223 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:43.165945 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:43.229118 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:43.607319 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:43.665202 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:43.665873 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:43.732421 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:44.048964 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:44.107343 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:44.164385 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:44.165397 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:44.229551 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:44.607086 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:44.665078 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:44.665989 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:44.730358 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:45.107591 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:45.166278 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:45.167575 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:45.231331 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:45.606724 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:45.664995 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:45.665787 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:45.732895 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:46.106773 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:46.165171 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:46.165912 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:46.229399 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:46.547780 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:46.607350 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:46.665389 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:46.666303 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:46.731525 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:47.106209 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:47.163895 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:47.164884 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:47.229095 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:47.606775 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:47.665380 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:47.665692 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:47.731917 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:48.107364 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:48.165392 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:48.165913 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:48.231232 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:48.548818 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:48.606371 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:48.663730 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:48.664858 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:48.732509 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:49.106165 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:49.164058 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:49.164818 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:49.229117 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:49.607330 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:49.665401 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:49.666284 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:49.729638 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:50.106807 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:50.165362 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:50.165819 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:50.229533 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:50.607636 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:50.664117 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:50.665605 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:50.732603 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:51.048839 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:51.106900 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:51.165072 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:51.165775 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:51.229311 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:51.606451 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:51.664997 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:51.665815 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:51.731796 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:52.106808 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:52.164077 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:52.165278 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:52.229558 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:52.607030 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:52.664333 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:52.666139 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:52.733276 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:53.048995 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:53.106794 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:53.165868 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:53.166197 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:53.229360 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:53.607214 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:53.665993 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:53.666208 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:53.731640 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:54.107271 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:54.165328 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:54.166364 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:54.229079 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:54.606705 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:54.664672 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:54.666352 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:54.730467 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:55.106965 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:55.164906 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:55.166502 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:55.229372 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:55.548023 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:55.606864 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:55.665603 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:55.666618 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:55.731098 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:56.107162 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:56.165652 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:56.166376 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:56.229557 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:56.606713 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:56.665178 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:56.666155 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:56.734171 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:57.107078 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:57.165179 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:57.165612 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:57.229793 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:57.548711 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:57.606958 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:57.664637 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:57.665765 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:57.729742 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:58.106906 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:58.165461 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:58.165525 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:58.229658 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:58.609691 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:58.669572 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:58.675975 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:58.732754 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:59.106766 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:59.164275 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:59.165759 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:59.230804 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:28:59.548907 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:28:59.606666 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:28:59.664947 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:28:59.667200 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:28:59.731305 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:00.111249 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:00.171082 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:00.172588 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:00.229939 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:00.606675 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:00.664706 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:00.665078 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:00.731104 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:01.106469 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:01.164107 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:01.165999 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:01.229151 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:01.607043 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:01.665707 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:01.666017 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:01.729987 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:02.049504 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:02.106841 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:02.165353 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:02.165892 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:02.228534 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:02.606739 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:02.664844 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:02.665248 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:02.732110 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:03.107204 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:03.165784 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:03.166488 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:03.229182 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:03.607044 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:03.665282 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:03.666260 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:03.731968 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:04.106472 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:04.164198 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:04.165513 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:04.229530 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:04.548715 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:04.606846 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:04.665114 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:04.665553 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:04.730698 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:05.107656 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:05.164517 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:05.165706 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:05.229175 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:05.607378 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:05.664714 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:05.666792 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:05.731569 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:06.107025 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:06.163700 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:06.165357 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:06.229224 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:06.606919 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:06.665290 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:06.665848 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:06.732829 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:07.048225 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:07.107686 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:07.165538 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:07.165988 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:07.229364 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:07.606902 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:07.665906 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:07.666169 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:07.731979 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:08.106294 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:08.164356 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:08.165766 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:08.229657 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:08.606386 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:08.665426 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:08.665867 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:08.732433 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:09.048948 3324089 node_ready.go:53] node "addons-140056" has status "Ready":"False"
	I0723 14:29:09.110145 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:09.165886 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:09.167365 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:09.229640 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:09.607007 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:09.665195 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:09.665769 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:09.731530 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:10.106883 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:10.165353 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:10.165655 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:10.229690 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:10.606443 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:10.664438 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:10.664971 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:10.732580 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:11.107481 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:11.164761 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[14:29:11 - 14:29:15] kapi.go:96 repeats the same four waits roughly twice per second: pods "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", and "kubernetes.io/minikube-addons=csi-hostpath-driver" all remain Pending: [<nil>]; node_ready.go:53 reports node "addons-140056" "Ready":"False" at 14:29:11 and 14:29:13.
	I0723 14:29:15.568390 3324089 node_ready.go:49] node "addons-140056" has status "Ready":"True"
	I0723 14:29:15.568416 3324089 node_ready.go:38] duration metric: took 43.523493377s for node "addons-140056" to be "Ready" ...
	I0723 14:29:15.568427 3324089 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
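For context on the node_ready/pod_ready lines above and below: minikube polls the Kubernetes API until the node reports the NodeReady condition, then does the same per pod. A minimal client-go sketch of an equivalent node-readiness wait follows; the kubeconfig path and the 2s/6m polling parameters are assumptions for illustration, not minikube's actual code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, give up after 6m, mirroring the cadence seen in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, getErr := cs.CoreV1().Nodes().Get(ctx, "addons-140056", metav1.GetOptions{})
				if getErr != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready:", err == nil)
	}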
	I0723 14:29:15.596495 3324089 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jgz96" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:15.639962 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:15.685723 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:15.688582 3324089 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0723 14:29:15.688607 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:15.739288 3324089 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0723 14:29:15.739317 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
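The "Found N Pods for label selector" lines above come from listing pods by selector. A hedged sketch of such a lookup with client-go; the package and helper names (k8swait, countPods) are invented for illustration.

	package k8swait

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// countPods lists pods matching selector in ns and prints each pod's
	// phase, e.g. "registry-xxxxx: Pending".
	func countPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (int, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return 0, err
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
		}
		return len(pods.Items), nil
	}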
	[14:29:16 - 14:29:18] the same four selectors remain Pending: [<nil>]; pod_ready.go:102 reports pod "coredns-7db6d8ff4d-jgz96" "Ready":"False" at 14:29:17.
	I0723 14:29:18.603441 3324089 pod_ready.go:92] pod "coredns-7db6d8ff4d-jgz96" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.603597 3324089 pod_ready.go:81] duration metric: took 3.007065828s for pod "coredns-7db6d8ff4d-jgz96" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.603639 3324089 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.609209 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:29:18.618987 3324089 pod_ready.go:92] pod "etcd-addons-140056" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.619056 3324089 pod_ready.go:81] duration metric: took 15.397479ms for pod "etcd-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.619085 3324089 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.628850 3324089 pod_ready.go:92] pod "kube-apiserver-addons-140056" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.628920 3324089 pod_ready.go:81] duration metric: took 9.814929ms for pod "kube-apiserver-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.628946 3324089 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.642868 3324089 pod_ready.go:92] pod "kube-controller-manager-addons-140056" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.642940 3324089 pod_ready.go:81] duration metric: took 13.974094ms for pod "kube-controller-manager-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.642968 3324089 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qch7m" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.650639 3324089 pod_ready.go:92] pod "kube-proxy-qch7m" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:18.650709 3324089 pod_ready.go:81] duration metric: took 7.720102ms for pod "kube-proxy-qch7m" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.650735 3324089 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:18.673107 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:29:18.677199 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0723 14:29:18.753580 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:29:19.001465 3324089 pod_ready.go:92] pod "kube-scheduler-addons-140056" in "kube-system" namespace has status "Ready":"True"
	I0723 14:29:19.001550 3324089 pod_ready.go:81] duration metric: took 350.794651ms for pod "kube-scheduler-addons-140056" in "kube-system" namespace to be "Ready" ...
	I0723 14:29:19.001577 3324089 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace to be "Ready" ...
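Note that the "Ready" value reported by pod_ready.go is the PodReady condition, not the pod phase: a pod such as metrics-server can be Running while still Ready=False until its readiness probe passes. A small sketch of that condition check (illustrative, not minikube's implementation):

	package k8swait

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady returns true only when the PodReady condition is True,
	// which is stricter than Status.Phase == Running.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}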
	[14:29:19 - 14:30:00] kapi.go:96 keeps polling the "gcp-auth", "registry", "ingress-nginx", and "csi-hostpath-driver" selectors, all still Pending: [<nil>]; pod_ready.go:102 reports pod "metrics-server-c59844bb4-ql9z2" "Ready":"False" roughly every two seconds from 14:29:21 through 14:30:00.
	I0723 14:30:00.176716 3324089 kapi.go:107] duration metric: took 1m24.516605404s to wait for kubernetes.io/minikube-addons=registry ...
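The kapi.go:107 duration metric above marks the end of one selector's wait loop. Below is a sketch of the overall pattern: poll until every pod behind a selector is Ready, then report the elapsed time. Function and parameter names are assumptions, not minikube's kapi.go.

	package k8swait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForSelector blocks until all pods matching selector report the
	// PodReady condition, then logs a duration metric like the line above.
	func waitForSelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, listErr := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if listErr != nil || len(pods.Items) == 0 {
					return false, nil // nothing scheduled yet; keep waiting
				}
				for _, p := range pods.Items {
					ready := false
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
					if !ready {
						return false, nil
					}
				}
				return true, nil
			})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
		}
		return err
	}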
	[14:30:00 - 14:30:05] polling continues for the "gcp-auth", "app.kubernetes.io/name=ingress-nginx", and "csi-hostpath-driver" selectors, all still Pending: [<nil>] (registry is done); pod_ready.go:102 reports pod "metrics-server-c59844bb4-ql9z2" "Ready":"False" at 14:30:02 and 14:30:05.
	I0723 14:30:05.666188 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:05.732969 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:06.107362 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:06.169635 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:06.231469 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:06.607260 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:06.666208 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:06.735411 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:07.011455 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:07.106455 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:07.165890 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:07.231077 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:07.607618 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:07.666103 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:07.737041 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:08.106785 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:08.165833 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:08.230569 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:08.606919 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:08.666590 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:08.731441 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:09.018089 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:09.107391 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:09.166923 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:09.230648 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:09.608322 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:09.676226 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:09.746255 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:10.108490 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:10.166001 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:10.230646 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:10.607057 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0723 14:30:10.665982 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:10.731885 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:11.108913 3324089 kapi.go:107] duration metric: took 1m31.005853741s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0723 14:30:11.111047 3324089 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-140056 cluster.
	I0723 14:30:11.112742 3324089 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0723 14:30:11.114776 3324089 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
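The three out.go lines above describe the gcp-auth webhook's opt-out: a pod carrying a label with the `gcp-auth-skip-secret` key is left alone. A hypothetical pod spec with that label, built from client-go types, is sketched below; the pod name, namespace, image, and label value are placeholders, since the log only confirms the label key itself.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical pod that opts out of gcp-auth credential injection.
	// The log above confirms the label key; the value here is arbitrary.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}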
	I0723 14:30:11.165225 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:11.230820 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:11.507989 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:11.665801 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:11.732818 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:12.166482 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:12.235645 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:12.665789 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:12.753940 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:13.166576 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:13.231254 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:13.508197 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:13.666293 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:13.765088 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:14.166747 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:14.231102 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:14.667120 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:14.738637 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:15.168771 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:15.237172 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:15.509196 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:15.665800 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:15.741918 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:16.165792 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:16.230965 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:16.678379 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:16.733988 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:17.165979 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:17.230164 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:17.509523 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:17.666647 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:17.736984 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:18.165810 3324089 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0723 14:30:18.229971 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:18.665542 3324089 kapi.go:107] duration metric: took 1m43.004543556s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0723 14:30:18.741009 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:19.231193 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:19.512344 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:19.733452 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:20.232648 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:20.737023 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:21.230271 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:21.733281 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:22.009667 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:22.230962 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:22.733529 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:23.234261 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:23.733482 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:24.011688 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:24.231487 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:24.735094 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:25.232531 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:25.735478 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:26.011909 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:26.231159 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:26.734492 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:27.231412 3324089 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0723 14:30:27.732602 3324089 kapi.go:107] duration metric: took 1m51.507947444s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0723 14:30:27.734916 3324089 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0723 14:30:27.736557 3324089 addons.go:510] duration metric: took 1m58.745359938s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0723 14:30:28.013377 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:30.029288 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:32.508108 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:35.011351 3324089 pod_ready.go:102] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"False"
	I0723 14:30:36.010078 3324089 pod_ready.go:92] pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace has status "Ready":"True"
	I0723 14:30:36.010110 3324089 pod_ready.go:81] duration metric: took 1m17.008512698s for pod "metrics-server-c59844bb4-ql9z2" in "kube-system" namespace to be "Ready" ...
	I0723 14:30:36.010124 3324089 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rhfcp" in "kube-system" namespace to be "Ready" ...
	I0723 14:30:36.016710 3324089 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-rhfcp" in "kube-system" namespace has status "Ready":"True"
	I0723 14:30:36.016739 3324089 pod_ready.go:81] duration metric: took 6.604634ms for pod "nvidia-device-plugin-daemonset-rhfcp" in "kube-system" namespace to be "Ready" ...
	I0723 14:30:36.016762 3324089 pod_ready.go:38] duration metric: took 1m20.448322356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0723 14:30:36.018002 3324089 api_server.go:52] waiting for apiserver process to appear ...
	I0723 14:30:36.019758 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 14:30:36.019850 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 14:30:36.072301 3324089 cri.go:89] found id: "a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:36.072324 3324089 cri.go:89] found id: ""
	I0723 14:30:36.072332 3324089 logs.go:276] 1 containers: [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91]
	I0723 14:30:36.072801 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.077756 3324089 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 14:30:36.077837 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 14:30:36.122182 3324089 cri.go:89] found id: "137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:36.122206 3324089 cri.go:89] found id: ""
	I0723 14:30:36.122214 3324089 logs.go:276] 1 containers: [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9]
	I0723 14:30:36.122278 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.126074 3324089 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 14:30:36.126192 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 14:30:36.173196 3324089 cri.go:89] found id: "0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:36.173217 3324089 cri.go:89] found id: ""
	I0723 14:30:36.173229 3324089 logs.go:276] 1 containers: [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c]
	I0723 14:30:36.173292 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.176835 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 14:30:36.176908 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 14:30:36.216340 3324089 cri.go:89] found id: "54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:36.216361 3324089 cri.go:89] found id: ""
	I0723 14:30:36.216369 3324089 logs.go:276] 1 containers: [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d]
	I0723 14:30:36.216451 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.220470 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 14:30:36.220550 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 14:30:36.271330 3324089 cri.go:89] found id: "82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:36.271401 3324089 cri.go:89] found id: ""
	I0723 14:30:36.271422 3324089 logs.go:276] 1 containers: [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437]
	I0723 14:30:36.271511 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.275008 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 14:30:36.275115 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 14:30:36.315432 3324089 cri.go:89] found id: "a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:36.315455 3324089 cri.go:89] found id: ""
	I0723 14:30:36.315465 3324089 logs.go:276] 1 containers: [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967]
	I0723 14:30:36.315525 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:36.319009 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 14:30:36.319082 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 14:30:36.360882 3324089 cri.go:89] found id: "bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:36.360903 3324089 cri.go:89] found id: ""
	I0723 14:30:36.360911 3324089 logs.go:276] 1 containers: [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f]
	I0723 14:30:36.360967 3324089 ssh_runner.go:195] Run: which crictl
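Each "listing CRI containers" / "found id" pair above is one round trip to crictl: list every container, including exited ones, filtered by name, with --quiet so only IDs come back; "which crictl" then locates the binary for the log-gathering step that follows. A sketch of the same probe, assuming it runs on the node where crictl and sudo are available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all (including exited) CRI container IDs whose
// name matches the given filter, mirroring the crictl invocation logged above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
	}
}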
	I0723 14:30:36.365330 3324089 logs.go:123] Gathering logs for etcd [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9] ...
	I0723 14:30:36.365358 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:36.437161 3324089 logs.go:123] Gathering logs for kube-scheduler [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d] ...
	I0723 14:30:36.437198 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:36.500359 3324089 logs.go:123] Gathering logs for kube-controller-manager [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967] ...
	I0723 14:30:36.500394 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:36.571986 3324089 logs.go:123] Gathering logs for kindnet [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f] ...
	I0723 14:30:36.572023 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:36.626364 3324089 logs.go:123] Gathering logs for kubelet ...
	I0723 14:30:36.626400 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0723 14:30:36.661816 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.261241    1548 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.662127 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.261372    1548 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.664485 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271322    1548 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.664698 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271363    1548 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.664879 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271626    1548 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.665081 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271661    1548 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.665270 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271627    1548 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.665462 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271686    1548 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667218 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302024    1548 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667442 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302075    1548 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667615 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302140    1548 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667807 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302153    1548 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.667993 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302194    1548 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668198 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302209    1548 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668376 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302254    1548 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668575 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302266    1548 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668763 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302313    1548 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.668967 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.669153 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.669366 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.669687 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:36.669872 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:36.711722 3324089 logs.go:123] Gathering logs for kube-apiserver [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91] ...
	I0723 14:30:36.711755 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:36.771692 3324089 logs.go:123] Gathering logs for coredns [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c] ...
	I0723 14:30:36.771733 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:36.813680 3324089 logs.go:123] Gathering logs for kube-proxy [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437] ...
	I0723 14:30:36.813710 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:36.854363 3324089 logs.go:123] Gathering logs for CRI-O ...
	I0723 14:30:36.854390 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 14:30:36.946233 3324089 logs.go:123] Gathering logs for container status ...
	I0723 14:30:36.946273 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 14:30:37.016903 3324089 logs.go:123] Gathering logs for dmesg ...
	I0723 14:30:37.016956 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 14:30:37.041892 3324089 logs.go:123] Gathering logs for describe nodes ...
	I0723 14:30:37.041927 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 14:30:37.209083 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:37.209109 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0723 14:30:37.209265 3324089 out.go:239] X Problems detected in kubelet:
	W0723 14:30:37.209278 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:37.209286 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:37.209328 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:37.209359 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:37.209366 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:37.209378 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:37.209385 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
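The "Found kubelet problem" warnings and the "Problems detected in kubelet" summary above come from scanning the last 400 kubelet journal lines for known failure patterns. The RBAC messages themselves ("no relationship found between node ... and this object") are the node authorizer rejecting list/watch requests issued before the pods referencing those secrets and configmaps were bound to the node, which is usually a transient startup race. A rough sketch of such a scan follows; the patterns here are illustrative, and minikube maintains its own list.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same source the log reads: the last 400 kubelet journal lines.
	cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Illustrative problem patterns; minikube keeps its own catalog.
	patterns := []string{"is forbidden", "Failed to watch", "failed to list"}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}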
	I0723 14:30:47.210735 3324089 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:30:47.228220 3324089 api_server.go:72] duration metric: took 2m18.237359054s to wait for apiserver process to appear ...
	I0723 14:30:47.228268 3324089 api_server.go:88] waiting for apiserver healthz status ...
	I0723 14:30:47.228332 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 14:30:47.228401 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 14:30:47.269505 3324089 cri.go:89] found id: "a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:47.269531 3324089 cri.go:89] found id: ""
	I0723 14:30:47.269539 3324089 logs.go:276] 1 containers: [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91]
	I0723 14:30:47.269624 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.273534 3324089 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 14:30:47.273605 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 14:30:47.315725 3324089 cri.go:89] found id: "137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:47.315745 3324089 cri.go:89] found id: ""
	I0723 14:30:47.315753 3324089 logs.go:276] 1 containers: [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9]
	I0723 14:30:47.315815 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.319753 3324089 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 14:30:47.319872 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 14:30:47.360343 3324089 cri.go:89] found id: "0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:47.360368 3324089 cri.go:89] found id: ""
	I0723 14:30:47.360376 3324089 logs.go:276] 1 containers: [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c]
	I0723 14:30:47.360440 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.364054 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 14:30:47.364182 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 14:30:47.404963 3324089 cri.go:89] found id: "54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:47.404986 3324089 cri.go:89] found id: ""
	I0723 14:30:47.404994 3324089 logs.go:276] 1 containers: [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d]
	I0723 14:30:47.405052 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.408542 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 14:30:47.408612 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 14:30:47.446412 3324089 cri.go:89] found id: "82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:47.446438 3324089 cri.go:89] found id: ""
	I0723 14:30:47.446449 3324089 logs.go:276] 1 containers: [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437]
	I0723 14:30:47.446520 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.450262 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 14:30:47.450340 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 14:30:47.488314 3324089 cri.go:89] found id: "a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:47.488336 3324089 cri.go:89] found id: ""
	I0723 14:30:47.488344 3324089 logs.go:276] 1 containers: [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967]
	I0723 14:30:47.488401 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.492066 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 14:30:47.492158 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 14:30:47.538826 3324089 cri.go:89] found id: "bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:47.538846 3324089 cri.go:89] found id: ""
	I0723 14:30:47.538853 3324089 logs.go:276] 1 containers: [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f]
	I0723 14:30:47.538912 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:47.543137 3324089 logs.go:123] Gathering logs for kube-apiserver [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91] ...
	I0723 14:30:47.543170 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:47.602911 3324089 logs.go:123] Gathering logs for etcd [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9] ...
	I0723 14:30:47.602946 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:47.670819 3324089 logs.go:123] Gathering logs for kube-controller-manager [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967] ...
	I0723 14:30:47.670853 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:47.762514 3324089 logs.go:123] Gathering logs for kindnet [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f] ...
	I0723 14:30:47.762603 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:47.815271 3324089 logs.go:123] Gathering logs for CRI-O ...
	I0723 14:30:47.815306 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 14:30:47.919725 3324089 logs.go:123] Gathering logs for kubelet ...
	I0723 14:30:47.919804 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0723 14:30:47.961098 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.261241    1548 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.961343 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.261372    1548 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.963737 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271322    1548 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.963956 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271363    1548 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.964137 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271626    1548 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.964336 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271661    1548 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.964499 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271627    1548 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.964679 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271686    1548 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.966483 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302024    1548 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.966704 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302075    1548 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.966879 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302140    1548 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967072 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302153    1548 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967263 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302194    1548 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967470 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302209    1548 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967650 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302254    1548 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.967849 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302266    1548 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968034 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302313    1548 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968246 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968434 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968640 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.968960 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:47.969147 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:48.012602 3324089 logs.go:123] Gathering logs for describe nodes ...
	I0723 14:30:48.012647 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 14:30:48.187330 3324089 logs.go:123] Gathering logs for coredns [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c] ...
	I0723 14:30:48.187358 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:48.232203 3324089 logs.go:123] Gathering logs for kube-scheduler [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d] ...
	I0723 14:30:48.232237 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:48.284936 3324089 logs.go:123] Gathering logs for kube-proxy [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437] ...
	I0723 14:30:48.284985 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:48.322311 3324089 logs.go:123] Gathering logs for container status ...
	I0723 14:30:48.322340 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 14:30:48.374834 3324089 logs.go:123] Gathering logs for dmesg ...
	I0723 14:30:48.374863 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 14:30:48.393715 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:48.393738 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0723 14:30:48.393822 3324089 out.go:239] X Problems detected in kubelet:
	W0723 14:30:48.393838 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:48.393846 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:48.393975 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:48.393995 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:48.394011 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:48.394018 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:48.394027 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:30:58.395042 3324089 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0723 14:30:58.403031 3324089 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0723 14:30:58.404037 3324089 api_server.go:141] control plane version: v1.30.3
	I0723 14:30:58.404065 3324089 api_server.go:131] duration metric: took 11.17578447s to wait for apiserver health ...
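
The healthz wait above polls the apiserver endpoint until it answers 200. A minimal Go sketch of the same probe, assuming unauthenticated access to /healthz is permitted (as it evidently is here, since the probe returned 200) and skipping TLS verification because the apiserver certificate is signed by the cluster-internal CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skip certificate verification for this diagnostic probe only;
        // do not do this against a production endpoint.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver did not become healthy before the deadline")
    }
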
	I0723 14:30:58.404075 3324089 system_pods.go:43] waiting for kube-system pods to appear ...
	I0723 14:30:58.404096 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0723 14:30:58.404166 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0723 14:30:58.460759 3324089 cri.go:89] found id: "a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:58.460780 3324089 cri.go:89] found id: ""
	I0723 14:30:58.460788 3324089 logs.go:276] 1 containers: [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91]
	I0723 14:30:58.460847 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.464276 3324089 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0723 14:30:58.464352 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0723 14:30:58.500803 3324089 cri.go:89] found id: "137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:58.500822 3324089 cri.go:89] found id: ""
	I0723 14:30:58.500831 3324089 logs.go:276] 1 containers: [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9]
	I0723 14:30:58.500886 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.504441 3324089 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0723 14:30:58.504514 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0723 14:30:58.541804 3324089 cri.go:89] found id: "0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:58.541823 3324089 cri.go:89] found id: ""
	I0723 14:30:58.541831 3324089 logs.go:276] 1 containers: [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c]
	I0723 14:30:58.541885 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.545534 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0723 14:30:58.545600 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0723 14:30:58.583913 3324089 cri.go:89] found id: "54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:58.583936 3324089 cri.go:89] found id: ""
	I0723 14:30:58.583944 3324089 logs.go:276] 1 containers: [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d]
	I0723 14:30:58.583999 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.588604 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0723 14:30:58.588675 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0723 14:30:58.626753 3324089 cri.go:89] found id: "82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:58.626775 3324089 cri.go:89] found id: ""
	I0723 14:30:58.626783 3324089 logs.go:276] 1 containers: [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437]
	I0723 14:30:58.626839 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.630452 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0723 14:30:58.630598 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0723 14:30:58.674868 3324089 cri.go:89] found id: "a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:58.674888 3324089 cri.go:89] found id: ""
	I0723 14:30:58.674896 3324089 logs.go:276] 1 containers: [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967]
	I0723 14:30:58.674959 3324089 ssh_runner.go:195] Run: which crictl
	I0723 14:30:58.678606 3324089 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0723 14:30:58.678689 3324089 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0723 14:30:58.717849 3324089 cri.go:89] found id: "bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:58.717874 3324089 cri.go:89] found id: ""
	I0723 14:30:58.717882 3324089 logs.go:276] 1 containers: [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f]
	I0723 14:30:58.717937 3324089 ssh_runner.go:195] Run: which crictl
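
Each control-plane component's container ID is discovered with one `crictl ps -a --quiet --name=<component>` call, as the cri.go lines above show. A rough stand-alone equivalent, run directly on the node (the component list and the sudo invocation are assumptions for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            // --quiet prints one container ID per line; -a includes exited ones.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }
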
	I0723 14:30:58.721890 3324089 logs.go:123] Gathering logs for kubelet ...
	I0723 14:30:58.721918 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0723 14:30:58.764391 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.261241    1548 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.764630 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.261372    1548 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.766939 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271322    1548 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767152 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271363    1548 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767333 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271626    1548 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767533 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271661    1548 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767697 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.271627    1548 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.767879 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.271686    1548 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.769636 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302024    1548 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.769845 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302075    1548 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770018 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302140    1548 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770210 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302153    1548 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770396 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302194    1548 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770612 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302209    1548 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770791 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302254    1548 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.770991 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302266    1548 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.771184 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302313    1548 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.771392 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.771579 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.771785 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.772106 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:58.772290 3324089 logs.go:138] Found kubelet problem: Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:58.815554 3324089 logs.go:123] Gathering logs for describe nodes ...
	I0723 14:30:58.815580 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0723 14:30:58.942348 3324089 logs.go:123] Gathering logs for coredns [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c] ...
	I0723 14:30:58.942458 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c"
	I0723 14:30:58.984159 3324089 logs.go:123] Gathering logs for kube-scheduler [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d] ...
	I0723 14:30:58.984196 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d"
	I0723 14:30:59.037940 3324089 logs.go:123] Gathering logs for kube-proxy [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437] ...
	I0723 14:30:59.037973 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437"
	I0723 14:30:59.078449 3324089 logs.go:123] Gathering logs for kube-controller-manager [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967] ...
	I0723 14:30:59.078478 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967"
	I0723 14:30:59.175340 3324089 logs.go:123] Gathering logs for kindnet [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f] ...
	I0723 14:30:59.175379 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f"
	I0723 14:30:59.223478 3324089 logs.go:123] Gathering logs for container status ...
	I0723 14:30:59.223511 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0723 14:30:59.289254 3324089 logs.go:123] Gathering logs for dmesg ...
	I0723 14:30:59.289285 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0723 14:30:59.308508 3324089 logs.go:123] Gathering logs for kube-apiserver [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91] ...
	I0723 14:30:59.308550 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91"
	I0723 14:30:59.394637 3324089 logs.go:123] Gathering logs for etcd [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9] ...
	I0723 14:30:59.394670 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9"
	I0723 14:30:59.463095 3324089 logs.go:123] Gathering logs for CRI-O ...
	I0723 14:30:59.463129 3324089 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0723 14:30:59.571293 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:59.571354 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0723 14:30:59.571436 3324089 out.go:239] X Problems detected in kubelet:
	W0723 14:30:59.571580 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302324    1548 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140056" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140056' and this object
	W0723 14:30:59.571594 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.302381    1548 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:59.571624 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.302392    1548 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-140056' and this object
	W0723 14:30:59.571637 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: W0723 14:29:15.352325    1548 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	W0723 14:30:59.571647 3324089 out.go:239]   Jul 23 14:29:15 addons-140056 kubelet[1548]: E0723 14:29:15.352392    1548 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-140056" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-140056' and this object
	I0723 14:30:59.571653 3324089 out.go:304] Setting ErrFile to fd 2...
	I0723 14:30:59.571660 3324089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:31:09.583936 3324089 system_pods.go:59] 18 kube-system pods found
	I0723 14:31:09.583990 3324089 system_pods.go:61] "coredns-7db6d8ff4d-jgz96" [3ec14c0f-c68d-4fd5-8582-5459477e40f5] Running
	I0723 14:31:09.583997 3324089 system_pods.go:61] "csi-hostpath-attacher-0" [3168c014-5a39-4ad9-bca0-efd7be769099] Running
	I0723 14:31:09.584002 3324089 system_pods.go:61] "csi-hostpath-resizer-0" [6cd26b68-041c-4848-8141-53baaab748f2] Running
	I0723 14:31:09.584007 3324089 system_pods.go:61] "csi-hostpathplugin-s9wmq" [271af35f-c33e-4782-ac08-d1c6e905f4b9] Running
	I0723 14:31:09.584011 3324089 system_pods.go:61] "etcd-addons-140056" [a774b4b3-a8ab-4841-b224-b8ae6f3ca338] Running
	I0723 14:31:09.584015 3324089 system_pods.go:61] "kindnet-2f7s4" [b028186c-e060-45cd-b380-c68f5957f6e8] Running
	I0723 14:31:09.584019 3324089 system_pods.go:61] "kube-apiserver-addons-140056" [e00d998c-7953-483b-b95e-44629436c611] Running
	I0723 14:31:09.584023 3324089 system_pods.go:61] "kube-controller-manager-addons-140056" [4a59bc12-47b7-4b80-8799-2297b8a54676] Running
	I0723 14:31:09.584028 3324089 system_pods.go:61] "kube-ingress-dns-minikube" [f19d23b6-9b9b-4771-aeaf-40a41665b578] Running
	I0723 14:31:09.584033 3324089 system_pods.go:61] "kube-proxy-qch7m" [ae8a5d47-ee7a-4d28-a940-13c073ba54b1] Running
	I0723 14:31:09.584037 3324089 system_pods.go:61] "kube-scheduler-addons-140056" [4b4bba11-865d-4a9d-97d1-5c4c0c60db06] Running
	I0723 14:31:09.584042 3324089 system_pods.go:61] "metrics-server-c59844bb4-ql9z2" [624cee58-45f6-4199-bfae-0fb883077e3f] Running
	I0723 14:31:09.584053 3324089 system_pods.go:61] "nvidia-device-plugin-daemonset-rhfcp" [724260a7-4c1d-4daf-a392-8f7cf7efaa06] Running
	I0723 14:31:09.584063 3324089 system_pods.go:61] "registry-656c9c8d9c-pjd4j" [1859702d-c9a6-460d-81c6-102ef98b706b] Running
	I0723 14:31:09.584067 3324089 system_pods.go:61] "registry-proxy-g8j86" [9477b7ff-d5fd-48f9-ad75-25e57440ab34] Running
	I0723 14:31:09.584071 3324089 system_pods.go:61] "snapshot-controller-745499f584-8fqv4" [3b243cc0-c3dc-4dab-975e-450249ec2899] Running
	I0723 14:31:09.584074 3324089 system_pods.go:61] "snapshot-controller-745499f584-drrj2" [b0844d73-10ab-444a-9a27-9c7b26a76450] Running
	I0723 14:31:09.584078 3324089 system_pods.go:61] "storage-provisioner" [ba9d48df-c1eb-455d-973a-5a8b814e6290] Running
	I0723 14:31:09.584084 3324089 system_pods.go:74] duration metric: took 11.180002701s to wait for pod list to return data ...
	I0723 14:31:09.584095 3324089 default_sa.go:34] waiting for default service account to be created ...
	I0723 14:31:09.586634 3324089 default_sa.go:45] found service account: "default"
	I0723 14:31:09.586662 3324089 default_sa.go:55] duration metric: took 2.558809ms for default service account to be created ...
	I0723 14:31:09.586673 3324089 system_pods.go:116] waiting for k8s-apps to be running ...
	I0723 14:31:09.595962 3324089 system_pods.go:86] 18 kube-system pods found
	I0723 14:31:09.596004 3324089 system_pods.go:89] "coredns-7db6d8ff4d-jgz96" [3ec14c0f-c68d-4fd5-8582-5459477e40f5] Running
	I0723 14:31:09.596011 3324089 system_pods.go:89] "csi-hostpath-attacher-0" [3168c014-5a39-4ad9-bca0-efd7be769099] Running
	I0723 14:31:09.596015 3324089 system_pods.go:89] "csi-hostpath-resizer-0" [6cd26b68-041c-4848-8141-53baaab748f2] Running
	I0723 14:31:09.596020 3324089 system_pods.go:89] "csi-hostpathplugin-s9wmq" [271af35f-c33e-4782-ac08-d1c6e905f4b9] Running
	I0723 14:31:09.596024 3324089 system_pods.go:89] "etcd-addons-140056" [a774b4b3-a8ab-4841-b224-b8ae6f3ca338] Running
	I0723 14:31:09.596029 3324089 system_pods.go:89] "kindnet-2f7s4" [b028186c-e060-45cd-b380-c68f5957f6e8] Running
	I0723 14:31:09.596033 3324089 system_pods.go:89] "kube-apiserver-addons-140056" [e00d998c-7953-483b-b95e-44629436c611] Running
	I0723 14:31:09.596037 3324089 system_pods.go:89] "kube-controller-manager-addons-140056" [4a59bc12-47b7-4b80-8799-2297b8a54676] Running
	I0723 14:31:09.596041 3324089 system_pods.go:89] "kube-ingress-dns-minikube" [f19d23b6-9b9b-4771-aeaf-40a41665b578] Running
	I0723 14:31:09.596045 3324089 system_pods.go:89] "kube-proxy-qch7m" [ae8a5d47-ee7a-4d28-a940-13c073ba54b1] Running
	I0723 14:31:09.596049 3324089 system_pods.go:89] "kube-scheduler-addons-140056" [4b4bba11-865d-4a9d-97d1-5c4c0c60db06] Running
	I0723 14:31:09.596053 3324089 system_pods.go:89] "metrics-server-c59844bb4-ql9z2" [624cee58-45f6-4199-bfae-0fb883077e3f] Running
	I0723 14:31:09.596057 3324089 system_pods.go:89] "nvidia-device-plugin-daemonset-rhfcp" [724260a7-4c1d-4daf-a392-8f7cf7efaa06] Running
	I0723 14:31:09.596061 3324089 system_pods.go:89] "registry-656c9c8d9c-pjd4j" [1859702d-c9a6-460d-81c6-102ef98b706b] Running
	I0723 14:31:09.596065 3324089 system_pods.go:89] "registry-proxy-g8j86" [9477b7ff-d5fd-48f9-ad75-25e57440ab34] Running
	I0723 14:31:09.596070 3324089 system_pods.go:89] "snapshot-controller-745499f584-8fqv4" [3b243cc0-c3dc-4dab-975e-450249ec2899] Running
	I0723 14:31:09.596079 3324089 system_pods.go:89] "snapshot-controller-745499f584-drrj2" [b0844d73-10ab-444a-9a27-9c7b26a76450] Running
	I0723 14:31:09.596084 3324089 system_pods.go:89] "storage-provisioner" [ba9d48df-c1eb-455d-973a-5a8b814e6290] Running
	I0723 14:31:09.596094 3324089 system_pods.go:126] duration metric: took 9.415409ms to wait for k8s-apps to be running ...
	I0723 14:31:09.596518 3324089 system_svc.go:44] waiting for kubelet service to be running ....
	I0723 14:31:09.596598 3324089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:31:09.609164 3324089 system_svc.go:56] duration metric: took 13.051218ms WaitForService to wait for kubelet
	I0723 14:31:09.609198 3324089 kubeadm.go:582] duration metric: took 2m40.618344346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0723 14:31:09.609223 3324089 node_conditions.go:102] verifying NodePressure condition ...
	I0723 14:31:09.613035 3324089 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0723 14:31:09.613068 3324089 node_conditions.go:123] node cpu capacity is 2
	I0723 14:31:09.613079 3324089 node_conditions.go:105] duration metric: took 3.851091ms to run NodePressure ...
	I0723 14:31:09.613092 3324089 start.go:241] waiting for startup goroutines ...
	I0723 14:31:09.613099 3324089 start.go:246] waiting for cluster config update ...
	I0723 14:31:09.613118 3324089 start.go:255] writing updated cluster config ...
	I0723 14:31:09.613400 3324089 ssh_runner.go:195] Run: rm -f paused
	I0723 14:31:09.957052 3324089 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0723 14:31:09.959226 3324089 out.go:177] * Done! kubectl is now configured to use "addons-140056" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.592709462Z" level=info msg="Removed container 14bde39b359153029d3fcace4c5d42045f76118382ce8738771bbe8442d2ef14: ingress-nginx/ingress-nginx-admission-patch-snt8v/patch" id=9981d075-2877-4334-a50c-908de9afdf7a name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.594062906Z" level=info msg="Removing container: 9bf4698e1ef672f3f6e6ce8160379f784e70470830e1f43e417cddf9e252fb01" id=6213553e-38d0-44a1-9c39-f7cb26ae3373 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.613387594Z" level=info msg="Removed container 9bf4698e1ef672f3f6e6ce8160379f784e70470830e1f43e417cddf9e252fb01: ingress-nginx/ingress-nginx-admission-create-hhs2n/create" id=6213553e-38d0-44a1-9c39-f7cb26ae3373 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.614984323Z" level=info msg="Stopping pod sandbox: 52b6e1e3a5c129be435fe7c6e89a6ca068e4b03c84474763875ea94670e48ba8" id=0602277f-c262-456a-83d9-211947aae634 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.615026768Z" level=info msg="Stopped pod sandbox (already stopped): 52b6e1e3a5c129be435fe7c6e89a6ca068e4b03c84474763875ea94670e48ba8" id=0602277f-c262-456a-83d9-211947aae634 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.615384196Z" level=info msg="Removing pod sandbox: 52b6e1e3a5c129be435fe7c6e89a6ca068e4b03c84474763875ea94670e48ba8" id=9439b967-2775-49fa-881c-ba27c759cca8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.623696977Z" level=info msg="Removed pod sandbox: 52b6e1e3a5c129be435fe7c6e89a6ca068e4b03c84474763875ea94670e48ba8" id=9439b967-2775-49fa-881c-ba27c759cca8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.624300824Z" level=info msg="Stopping pod sandbox: a8077b9fff75a23da421bda46b5251dd10b404be4b0ea7f794311871b035ca01" id=a8c326ec-378f-46fd-a9a4-cd815ae844c1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.624352575Z" level=info msg="Stopped pod sandbox (already stopped): a8077b9fff75a23da421bda46b5251dd10b404be4b0ea7f794311871b035ca01" id=a8c326ec-378f-46fd-a9a4-cd815ae844c1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.624633792Z" level=info msg="Removing pod sandbox: a8077b9fff75a23da421bda46b5251dd10b404be4b0ea7f794311871b035ca01" id=fdf139ee-70bd-4d88-ad43-1c36c264b2e7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.633016662Z" level=info msg="Removed pod sandbox: a8077b9fff75a23da421bda46b5251dd10b404be4b0ea7f794311871b035ca01" id=fdf139ee-70bd-4d88-ad43-1c36c264b2e7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.633558954Z" level=info msg="Stopping pod sandbox: 646d31216b1f841d4e240f3ec083a827d3752d312f54dcda1ba5a6accedccb8a" id=79e8a938-5df6-4a7c-9f7d-9198f59dbeb9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.633595869Z" level=info msg="Stopped pod sandbox (already stopped): 646d31216b1f841d4e240f3ec083a827d3752d312f54dcda1ba5a6accedccb8a" id=79e8a938-5df6-4a7c-9f7d-9198f59dbeb9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.633914519Z" level=info msg="Removing pod sandbox: 646d31216b1f841d4e240f3ec083a827d3752d312f54dcda1ba5a6accedccb8a" id=211643c9-3257-4e2d-b9aa-c074fc46a74f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.646235545Z" level=info msg="Removed pod sandbox: 646d31216b1f841d4e240f3ec083a827d3752d312f54dcda1ba5a6accedccb8a" id=211643c9-3257-4e2d-b9aa-c074fc46a74f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.647807281Z" level=info msg="Stopping pod sandbox: fff5df7e6f10d30af65cf13e8e8da7a7b36821de4edaed150534aed71c700d90" id=0adbd1d9-0a4f-4d93-96b0-c4ab7b2f97dd name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.647849390Z" level=info msg="Stopped pod sandbox (already stopped): fff5df7e6f10d30af65cf13e8e8da7a7b36821de4edaed150534aed71c700d90" id=0adbd1d9-0a4f-4d93-96b0-c4ab7b2f97dd name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.648196045Z" level=info msg="Removing pod sandbox: fff5df7e6f10d30af65cf13e8e8da7a7b36821de4edaed150534aed71c700d90" id=3fa7e0d6-3a7e-470d-903f-9704019141e0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 23 14:35:15 addons-140056 crio[967]: time="2024-07-23 14:35:15.656484423Z" level=info msg="Removed pod sandbox: fff5df7e6f10d30af65cf13e8e8da7a7b36821de4edaed150534aed71c700d90" id=3fa7e0d6-3a7e-470d-903f-9704019141e0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 23 14:37:38 addons-140056 crio[967]: time="2024-07-23 14:37:38.588310067Z" level=info msg="Stopping container: 3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4 (timeout: 30s)" id=b8f16895-126f-4ba5-92b5-334533d76c8a name=/runtime.v1.RuntimeService/StopContainer
	Jul 23 14:37:39 addons-140056 crio[967]: time="2024-07-23 14:37:39.764999857Z" level=info msg="Stopped container 3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4: kube-system/metrics-server-c59844bb4-ql9z2/metrics-server" id=b8f16895-126f-4ba5-92b5-334533d76c8a name=/runtime.v1.RuntimeService/StopContainer
	Jul 23 14:37:39 addons-140056 crio[967]: time="2024-07-23 14:37:39.765494857Z" level=info msg="Stopping pod sandbox: 3a4ce44ed17ac97a0d08e04221b193ea1dfb87993ba6b8bcfee5bf7d8a97cd72" id=4400a11b-96a1-4e7d-9195-e07671e37803 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 23 14:37:39 addons-140056 crio[967]: time="2024-07-23 14:37:39.765723234Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-ql9z2 Namespace:kube-system ID:3a4ce44ed17ac97a0d08e04221b193ea1dfb87993ba6b8bcfee5bf7d8a97cd72 UID:624cee58-45f6-4199-bfae-0fb883077e3f NetNS:/var/run/netns/b758ed1f-1c9b-481e-ad79-3fd16a43fa25 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 23 14:37:39 addons-140056 crio[967]: time="2024-07-23 14:37:39.765866104Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-ql9z2 from CNI network \"kindnet\" (type=ptp)"
	Jul 23 14:37:39 addons-140056 crio[967]: time="2024-07-23 14:37:39.820868040Z" level=info msg="Stopped pod sandbox: 3a4ce44ed17ac97a0d08e04221b193ea1dfb87993ba6b8bcfee5bf7d8a97cd72" id=4400a11b-96a1-4e7d-9195-e07671e37803 name=/runtime.v1.RuntimeService/StopPodSandbox
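
The CRI-O journal entries above follow a logfmt-like `time=... level=... msg=...` layout. A small sketch that extracts those three fields from `sudo journalctl -u crio -n 400` piped to stdin; the regexp is a simplification that ignores escaped quotes inside msg:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var criLine = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="([^"]*)"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := criLine.FindStringSubmatch(sc.Text()); m != nil {
                // m[1]=timestamp, m[2]=level, m[3]=message
                fmt.Printf("%-30s %-5s %s\n", m[1], m[2], m[3])
            }
        }
    }
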
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2deccc7daf59e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   f5c6f7d5071cd       hello-world-app-6778b5fc9f-4zn9v
	4755e5aeed108       docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e                         5 minutes ago       Running             nginx                     0                   142f182854052       nginx
	651c584db6839       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   6 minutes ago       Running             headlamp                  0                   871491cc55c8d       headlamp-7867546754-xc2vd
	6bdf0ac15bdda       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            7 minutes ago       Running             gcp-auth                  0                   f3a5848d068ae       gcp-auth-5db96cd9b4-b42k7
	87c4ce0f65f7b       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                         7 minutes ago       Running             yakd                      0                   9d9ce7531b9a3       yakd-dashboard-799879c74f-jkkhq
	3019c06c5c171       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   3a4ce44ed17ac       metrics-server-c59844bb4-ql9z2
	0c63da520ba2b       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   7a0819fadd8d8       coredns-7db6d8ff4d-jgz96
	d7535c8a235c4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   c864a0ebdcd8f       storage-provisioner
	bdb361c9cd9a1       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a                      9 minutes ago       Running             kindnet-cni               0                   b671b9e3303ab       kindnet-2f7s4
	82396ebc6d476       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                        9 minutes ago       Running             kube-proxy                0                   e8ab433d9f148       kube-proxy-qch7m
	137c42a93cc7c       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago       Running             etcd                      0                   43cc279c25a25       etcd-addons-140056
	a58f73816b730       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                        9 minutes ago       Running             kube-controller-manager   0                   160ad4908df22       kube-controller-manager-addons-140056
	a958daba0b9ba       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                        9 minutes ago       Running             kube-apiserver            0                   914614ae397d3       kube-apiserver-addons-140056
	54c3777af6f92       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                        9 minutes ago       Running             kube-scheduler            0                   ceb6154379673       kube-scheduler-addons-140056
	
	
	==> coredns [0c63da520ba2b80db592eab81fc2f6f0721e424d20beb381aa34e4fc2e6cb76c] <==
	[INFO] 10.244.0.14:45959 - 6077 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002961381s
	[INFO] 10.244.0.14:44651 - 14282 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012755s
	[INFO] 10.244.0.14:44651 - 7112 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158509s
	[INFO] 10.244.0.14:32895 - 14730 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000134467s
	[INFO] 10.244.0.14:32895 - 31351 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000055927s
	[INFO] 10.244.0.14:48991 - 38530 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059989s
	[INFO] 10.244.0.14:48991 - 43648 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050052s
	[INFO] 10.244.0.14:55309 - 3559 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075529s
	[INFO] 10.244.0.14:55309 - 14565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055508s
	[INFO] 10.244.0.14:56101 - 7995 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001773183s
	[INFO] 10.244.0.14:56101 - 581 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007259772s
	[INFO] 10.244.0.14:40495 - 27364 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000058914s
	[INFO] 10.244.0.14:40495 - 47328 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001009s
	[INFO] 10.244.0.19:42441 - 16961 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000158238s
	[INFO] 10.244.0.19:60458 - 16730 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225135s
	[INFO] 10.244.0.19:34564 - 27988 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101819s
	[INFO] 10.244.0.19:60295 - 6013 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000090684s
	[INFO] 10.244.0.19:49181 - 41653 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008764s
	[INFO] 10.244.0.19:55294 - 3538 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00005957s
	[INFO] 10.244.0.19:56344 - 158 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007564259s
	[INFO] 10.244.0.19:55202 - 25875 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007938049s
	[INFO] 10.244.0.19:58488 - 61562 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000924779s
	[INFO] 10.244.0.19:53801 - 64116 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001664144s
	[INFO] 10.244.0.22:40783 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000501914s
	[INFO] 10.244.0.22:45144 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138693s
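
The NXDOMAIN bursts above are the pod resolver walking its search domains: with ndots:5, `registry.kube-system.svc.cluster.local` has only four dots, so it is first tried against each search suffix (including the host's `us-east-2.compute.internal`) before the absolute name finally answers NOERROR. A sketch of that expansion, with the search list assumed from a typical pod /etc/resolv.conf on this cluster:

    package main

    import "fmt"

    func main() {
        // Reproduces the query expansion visible in the coredns log above.
        name := "registry.kube-system.svc.cluster.local"
        search := []string{
            "kube-system.svc.cluster.local",
            "svc.cluster.local",
            "cluster.local",
            "us-east-2.compute.internal",
        }
        for _, s := range search {
            fmt.Printf("try %s.%s.  -> NXDOMAIN (per the log)\n", name, s)
        }
        fmt.Printf("try %s.  -> NOERROR\n", name)
    }
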
	
	
	==> describe nodes <==
	Name:               addons-140056
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-140056
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=addons-140056
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_28_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-140056
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:28:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-140056
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:37:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:35:22 +0000   Tue, 23 Jul 2024 14:28:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:35:22 +0000   Tue, 23 Jul 2024 14:28:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:35:22 +0000   Tue, 23 Jul 2024 14:28:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:35:22 +0000   Tue, 23 Jul 2024 14:29:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-140056
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d73b747a75c4370b5d2c406795a0045
	  System UUID:                2cc61e4c-48a4-4fb9-b435-38f736e4329b
	  Boot ID:                    95e04985-bf92-47a1-9b5b-7f09371b9e30
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-4zn9v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  gcp-auth                    gcp-auth-5db96cd9b4-b42k7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  headlamp                    headlamp-7867546754-xc2vd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 coredns-7db6d8ff4d-jgz96                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m11s
	  kube-system                 etcd-addons-140056                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m25s
	  kube-system                 kindnet-2f7s4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m12s
	  kube-system                 kube-apiserver-addons-140056             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-controller-manager-addons-140056    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 kube-proxy-qch7m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-addons-140056             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  yakd-dashboard              yakd-dashboard-799879c74f-jkkhq          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m25s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s  kubelet          Node addons-140056 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s  kubelet          Node addons-140056 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s  kubelet          Node addons-140056 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node addons-140056 event: Registered Node addons-140056 in Controller
	  Normal  NodeReady                8m25s  kubelet          Node addons-140056 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001070] FS-Cache: O-key=[8] '2d713b0000000000'
	[  +0.000720] FS-Cache: N-cookie c=000000d2 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000000709a92e
	[  +0.001110] FS-Cache: N-key=[8] '2d713b0000000000'
	[  +0.008114] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=0000000092f01866
	[  +0.001106] FS-Cache: O-key=[8] '2d713b0000000000'
	[  +0.000742] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001059] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000655937a6
	[  +0.001197] FS-Cache: N-key=[8] '2d713b0000000000'
	[  +2.882746] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001045] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=000000008eb2f51f
	[  +0.001080] FS-Cache: O-key=[8] '2c713b0000000000'
	[  +0.000745] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000008f7cdf75
	[  +0.001066] FS-Cache: N-key=[8] '2c713b0000000000'
	[  +0.323741] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001294] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=00000000897df759
	[  +0.001091] FS-Cache: O-key=[8] '32713b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000001ce9a292
	[  +0.001092] FS-Cache: N-key=[8] '32713b0000000000'
	
	
	==> etcd [137c42a93cc7c36e96ae7e6a68be283b89d997b0b6f8ea281cc17fcd3f3eb8c9] <==
	{"level":"warn","ts":"2024-07-23T14:28:33.517577Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.195039Z","time spent":"322.533942ms","remote":"127.0.0.1:45928","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":29,"request content":"key:\"/registry/clusterrolebindings/storage-provisioner\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.517691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.687511ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-140056\" ","response":"range_response_count:1 size:5744"}
	{"level":"info","ts":"2024-07-23T14:28:33.517716Z","caller":"traceutil/trace.go:171","msg":"trace[1035957126] range","detail":"{range_begin:/registry/minions/addons-140056; range_end:; response_count:1; response_revision:378; }","duration":"322.711979ms","start":"2024-07-23T14:28:33.194997Z","end":"2024-07-23T14:28:33.517709Z","steps":["trace[1035957126] 'agreement among raft nodes before linearized reading'  (duration: 231.297148ms)","trace[1035957126] 'get authentication metadata'  (duration: 40.519344ms)","trace[1035957126] 'range keys from in-memory index tree'  (duration: 50.854018ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.517734Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.18444Z","time spent":"333.28938ms","remote":"127.0.0.1:45778","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5768,"request content":"key:\"/registry/minions/addons-140056\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.517831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.132113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/local-path-storage/\" range_end:\"/registry/resourcequotas/local-path-storage0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T14:28:33.517856Z","caller":"traceutil/trace.go:171","msg":"trace[2070602465] range","detail":"{range_begin:/registry/resourcequotas/local-path-storage/; range_end:/registry/resourcequotas/local-path-storage0; response_count:0; response_revision:378; }","duration":"340.155383ms","start":"2024-07-23T14:28:33.177692Z","end":"2024-07-23T14:28:33.517847Z","steps":["trace[2070602465] 'agreement among raft nodes before linearized reading'  (duration: 248.605034ms)","trace[2070602465] 'get authentication metadata'  (duration: 40.52238ms)","trace[2070602465] 'range keys from in-memory index tree'  (duration: 51.002582ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.517875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.177679Z","time spent":"340.189394ms","remote":"127.0.0.1:45706","response type":"/etcdserverpb.KV/Range","request count":0,"request size":92,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas/local-path-storage/\" range_end:\"/registry/resourcequotas/local-path-storage0\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.51797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.432039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T14:28:33.517994Z","caller":"traceutil/trace.go:171","msg":"trace[192733837] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:378; }","duration":"340.455448ms","start":"2024-07-23T14:28:33.177531Z","end":"2024-07-23T14:28:33.517987Z","steps":["trace[192733837] 'agreement among raft nodes before linearized reading'  (duration: 248.76813ms)","trace[192733837] 'get authentication metadata'  (duration: 40.524891ms)","trace[192733837] 'range keys from in-memory index tree'  (duration: 51.136999ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.518011Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.1775Z","time spent":"340.505886ms","remote":"127.0.0.1:45920","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":29,"request content":"key:\"/registry/clusterroles/minikube-ingress-dns\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.534185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"393.523729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-07-23T14:28:33.534978Z","caller":"traceutil/trace.go:171","msg":"trace[1309546410] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:378; }","duration":"394.570692ms","start":"2024-07-23T14:28:33.140386Z","end":"2024-07-23T14:28:33.534957Z","steps":["trace[1309546410] 'agreement among raft nodes before linearized reading'  (duration: 285.915848ms)","trace[1309546410] 'get authentication metadata'  (duration: 40.527147ms)","trace[1309546410] 'range keys from in-memory index tree'  (duration: 67.057866ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.549499Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.140372Z","time spent":"409.067647ms","remote":"127.0.0.1:45676","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":140,"request content":"key:\"/registry/ranges/serviceips\" "}
	{"level":"warn","ts":"2024-07-23T14:28:33.549764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"409.680274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T14:28:33.549821Z","caller":"traceutil/trace.go:171","msg":"trace[707520520] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:0; response_revision:378; }","duration":"409.739204ms","start":"2024-07-23T14:28:33.140068Z","end":"2024-07-23T14:28:33.549807Z","steps":["trace[707520520] 'agreement among raft nodes before linearized reading'  (duration: 286.236196ms)","trace[707520520] 'get authentication metadata'  (duration: 40.540702ms)","trace[707520520] 'range keys from in-memory index tree'  (duration: 82.89408ms)"],"step_count":3}
	{"level":"warn","ts":"2024-07-23T14:28:33.549845Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-23T14:28:33.14003Z","time spent":"409.810401ms","remote":"127.0.0.1:45730","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":29,"request content":"key:\"/registry/namespaces/gadget\" "}
	{"level":"info","ts":"2024-07-23T14:28:33.550585Z","caller":"traceutil/trace.go:171","msg":"trace[1112847054] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"145.168052ms","start":"2024-07-23T14:28:33.405406Z","end":"2024-07-23T14:28:33.550574Z","steps":["trace[1112847054] 'process raft request'  (duration: 65.881237ms)","trace[1112847054] 'compare'  (duration: 40.669967ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:28:33.788267Z","caller":"traceutil/trace.go:171","msg":"trace[1084376016] linearizableReadLoop","detail":"{readStateIndex:399; appliedIndex:398; }","duration":"116.889816ms","start":"2024-07-23T14:28:33.671361Z","end":"2024-07-23T14:28:33.788251Z","steps":["trace[1084376016] 'read index received'  (duration: 70.288691ms)","trace[1084376016] 'applied index is now lower than readState.Index'  (duration: 46.600567ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-23T14:28:33.788735Z","caller":"traceutil/trace.go:171","msg":"trace[1989678502] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"119.205454ms","start":"2024-07-23T14:28:33.669516Z","end":"2024-07-23T14:28:33.788722Z","steps":["trace[1989678502] 'process raft request'  (duration: 72.223754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:28:33.789069Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.688315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-23T14:28:33.789141Z","caller":"traceutil/trace.go:171","msg":"trace[1274984649] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:387; }","duration":"117.775751ms","start":"2024-07-23T14:28:33.671357Z","end":"2024-07-23T14:28:33.789133Z","steps":["trace[1274984649] 'agreement among raft nodes before linearized reading'  (duration: 117.620861ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:28:33.793176Z","caller":"traceutil/trace.go:171","msg":"trace[137792574] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"121.448591ms","start":"2024-07-23T14:28:33.671712Z","end":"2024-07-23T14:28:33.793161Z","steps":["trace[137792574] 'process raft request'  (duration: 116.495735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-23T14:28:33.811111Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.317313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/gadget/\" range_end:\"/registry/resourcequotas/gadget0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-23T14:28:33.811335Z","caller":"traceutil/trace.go:171","msg":"trace[783303961] range","detail":"{range_begin:/registry/resourcequotas/gadget/; range_end:/registry/resourcequotas/gadget0; response_count:0; response_revision:391; }","duration":"139.54994ms","start":"2024-07-23T14:28:33.671771Z","end":"2024-07-23T14:28:33.811321Z","steps":["trace[783303961] 'agreement among raft nodes before linearized reading'  (duration: 139.26329ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-23T14:28:33.811647Z","caller":"traceutil/trace.go:171","msg":"trace[1273456742] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"139.727542ms","start":"2024-07-23T14:28:33.671885Z","end":"2024-07-23T14:28:33.811612Z","steps":["trace[1273456742] 'process raft request'  (duration: 121.267305ms)","trace[1273456742] 'compare'  (duration: 17.678566ms)"],"step_count":2}
	
	
	==> gcp-auth [6bdf0ac15bdda759dbe5d8fd617a965b1af7a9de1f0cda30b2720d03bca35ce9] <==
	2024/07/23 14:30:10 GCP Auth Webhook started!
	2024/07/23 14:31:10 Ready to marshal response ...
	2024/07/23 14:31:10 Ready to write response ...
	2024/07/23 14:31:10 Ready to marshal response ...
	2024/07/23 14:31:10 Ready to write response ...
	2024/07/23 14:31:10 Ready to marshal response ...
	2024/07/23 14:31:10 Ready to write response ...
	2024/07/23 14:31:21 Ready to marshal response ...
	2024/07/23 14:31:21 Ready to write response ...
	2024/07/23 14:31:26 Ready to marshal response ...
	2024/07/23 14:31:26 Ready to write response ...
	2024/07/23 14:31:26 Ready to marshal response ...
	2024/07/23 14:31:26 Ready to write response ...
	2024/07/23 14:31:40 Ready to marshal response ...
	2024/07/23 14:31:40 Ready to write response ...
	2024/07/23 14:31:48 Ready to marshal response ...
	2024/07/23 14:31:48 Ready to write response ...
	2024/07/23 14:31:56 Ready to marshal response ...
	2024/07/23 14:31:56 Ready to write response ...
	2024/07/23 14:32:32 Ready to marshal response ...
	2024/07/23 14:32:32 Ready to write response ...
	2024/07/23 14:34:52 Ready to marshal response ...
	2024/07/23 14:34:52 Ready to write response ...
	
	
	==> kernel <==
	 14:37:40 up 23:20,  0 users,  load average: 0.02, 0.62, 1.52
	Linux addons-140056 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bdb361c9cd9a14e56cb6d358f6ab2176e483982c5537b445da9ad7f02031b57f] <==
	E0723 14:36:16.043672       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0723 14:36:24.711410       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:36:24.711451       1 main.go:299] handling current node
	W0723 14:36:26.363797       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:36:26.363905       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0723 14:36:34.712031       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:36:34.712067       1 main.go:299] handling current node
	I0723 14:36:44.711264       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:36:44.711295       1 main.go:299] handling current node
	I0723 14:36:54.711822       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:36:54.711868       1 main.go:299] handling current node
	W0723 14:36:56.663189       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 14:36:56.663235       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 14:37:03.378296       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:37:03.378330       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0723 14:37:04.711923       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:37:04.711965       1 main.go:299] handling current node
	W0723 14:37:13.076411       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:37:13.076480       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0723 14:37:14.711798       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:37:14.711836       1 main.go:299] handling current node
	I0723 14:37:24.711755       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:37:24.711790       1 main.go:299] handling current node
	I0723 14:37:34.711273       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:37:34.711305       1 main.go:299] handling current node
	
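Note on the reflector warnings above: the kube-system:kindnet service account is being denied list/watch on pods, namespaces, and networkpolicies at the cluster scope. For reference, a minimal ClusterRole/ClusterRoleBinding sketch that would satisfy exactly those watches — illustrative only; kindnet's actual RBAC is installed by its own manifest and may differ (for example it also typically covers nodes):

# Illustrative sketch, derived from the resource names in the errors above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kindnet
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kindnet
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kindnet
subjects:
  - kind: ServiceAccount
    name: kindnet
    namespace: kube-system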
	
	==> kube-apiserver [a958daba0b9ba0c8d9d5c5311f8098a5bc3c2438bfa54b6e90f10aa09e37fd91] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0723 14:30:40.643768       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0723 14:30:40.659160       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0723 14:31:10.865195       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.151.250"}
	I0723 14:31:52.119881       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0723 14:32:04.961398       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0723 14:32:11.293469       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.293610       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:32:11.330391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.330447       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:32:11.351168       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.351210       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:32:11.351517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.351550       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0723 14:32:11.442027       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0723 14:32:11.442188       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0723 14:32:12.352097       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0723 14:32:12.443078       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0723 14:32:12.460658       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0723 14:32:18.104625       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0723 14:32:19.148375       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0723 14:32:32.567536       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0723 14:32:32.900794       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.2.199"}
	I0723 14:34:52.485161       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.158.85"}
	E0723 14:34:54.125414       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [a58f73816b73096e221fa39bf12174e422e029bcf2493236fd089866bc393967] <==
	W0723 14:35:21.220966       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:35:21.221005       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:35:36.927248       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:35:36.927288       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:35:56.651460       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:35:56.651497       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:36:13.516038       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:36:13.516077       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:36:13.533654       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:36:13.533695       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:36:27.935577       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:36:27.935700       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:36:44.141991       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:36:44.142028       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:36:46.597138       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:36:46.597175       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:36:51.638613       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:36:51.638665       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:37:23.045402       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:37:23.045440       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:37:24.267533       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:37:24.267573       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0723 14:37:31.781465       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0723 14:37:31.781506       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0723 14:37:38.558230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="7.475µs"
	
	
	==> kube-proxy [82396ebc6d4766b0d7b5d3fd339d8d061a527ae22c008086b632902c95850437] <==
	I0723 14:28:34.497062       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:28:34.642144       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0723 14:28:35.096609       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0723 14:28:35.096734       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:28:35.099079       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0723 14:28:35.099178       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0723 14:28:35.099228       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:28:35.099473       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:28:35.100177       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:28:35.101784       1 config.go:192] "Starting service config controller"
	I0723 14:28:35.103914       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:28:35.104029       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:28:35.104060       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:28:35.104570       1 config.go:319] "Starting node config controller"
	I0723 14:28:35.104621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:28:35.205757       1 shared_informer.go:320] Caches are synced for node config
	I0723 14:28:35.205874       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:28:35.205967       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [54c3777af6f927748ddcbd45b75cad5d3d1a83ae959204c489cb9fd6611b442d] <==
	W0723 14:28:12.775403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:28:12.775456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:28:12.775548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:28:12.775586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:28:12.776652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:28:12.776728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:28:13.644066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 14:28:13.644106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 14:28:13.652350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0723 14:28:13.652467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0723 14:28:13.710337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 14:28:13.710445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 14:28:13.746321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 14:28:13.746447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 14:28:13.819200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 14:28:13.819332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 14:28:13.843767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0723 14:28:13.843882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0723 14:28:13.885360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 14:28:13.885474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 14:28:13.943638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 14:28:13.943760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 14:28:14.167465       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 14:28:14.167591       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0723 14:28:15.853918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 23 14:34:55 addons-140056 kubelet[1548]: I0723 14:34:55.185845    1548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19d23b6-9b9b-4771-aeaf-40a41665b578" path="/var/lib/kubelet/pods/f19d23b6-9b9b-4771-aeaf-40a41665b578/volumes"
	Jul 23 14:34:55 addons-140056 kubelet[1548]: I0723 14:34:55.186176    1548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc6d5c36-cff9-498c-811a-2240804cadf4" path="/var/lib/kubelet/pods/fc6d5c36-cff9-498c-811a-2240804cadf4/volumes"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.023886    1548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a96d060b-4b9d-411e-9243-066225274171-webhook-cert\") pod \"a96d060b-4b9d-411e-9243-066225274171\" (UID: \"a96d060b-4b9d-411e-9243-066225274171\") "
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.023961    1548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4dqz\" (UniqueName: \"kubernetes.io/projected/a96d060b-4b9d-411e-9243-066225274171-kube-api-access-b4dqz\") pod \"a96d060b-4b9d-411e-9243-066225274171\" (UID: \"a96d060b-4b9d-411e-9243-066225274171\") "
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.026359    1548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96d060b-4b9d-411e-9243-066225274171-kube-api-access-b4dqz" (OuterVolumeSpecName: "kube-api-access-b4dqz") pod "a96d060b-4b9d-411e-9243-066225274171" (UID: "a96d060b-4b9d-411e-9243-066225274171"). InnerVolumeSpecName "kube-api-access-b4dqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.029292    1548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96d060b-4b9d-411e-9243-066225274171-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a96d060b-4b9d-411e-9243-066225274171" (UID: "a96d060b-4b9d-411e-9243-066225274171"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.089875    1548 scope.go:117] "RemoveContainer" containerID="2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.106114    1548 scope.go:117] "RemoveContainer" containerID="2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: E0723 14:34:58.106508    1548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03\": container with ID starting with 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03 not found: ID does not exist" containerID="2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.106606    1548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03"} err="failed to get container status \"2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03\": rpc error: code = NotFound desc = could not find container \"2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03\": container with ID starting with 2ab9cb1b6a6e93dfe4f80d4c8da60bf7b3fcd755d45285bfc12df244cf9e0c03 not found: ID does not exist"
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.125054    1548 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a96d060b-4b9d-411e-9243-066225274171-webhook-cert\") on node \"addons-140056\" DevicePath \"\""
	Jul 23 14:34:58 addons-140056 kubelet[1548]: I0723 14:34:58.125087    1548 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b4dqz\" (UniqueName: \"kubernetes.io/projected/a96d060b-4b9d-411e-9243-066225274171-kube-api-access-b4dqz\") on node \"addons-140056\" DevicePath \"\""
	Jul 23 14:34:59 addons-140056 kubelet[1548]: I0723 14:34:59.185859    1548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96d060b-4b9d-411e-9243-066225274171" path="/var/lib/kubelet/pods/a96d060b-4b9d-411e-9243-066225274171/volumes"
	Jul 23 14:35:15 addons-140056 kubelet[1548]: I0723 14:35:15.574796    1548 scope.go:117] "RemoveContainer" containerID="14bde39b359153029d3fcace4c5d42045f76118382ce8738771bbe8442d2ef14"
	Jul 23 14:35:15 addons-140056 kubelet[1548]: I0723 14:35:15.592985    1548 scope.go:117] "RemoveContainer" containerID="9bf4698e1ef672f3f6e6ce8160379f784e70470830e1f43e417cddf9e252fb01"
	Jul 23 14:37:39 addons-140056 kubelet[1548]: I0723 14:37:39.966805    1548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm8z9\" (UniqueName: \"kubernetes.io/projected/624cee58-45f6-4199-bfae-0fb883077e3f-kube-api-access-qm8z9\") pod \"624cee58-45f6-4199-bfae-0fb883077e3f\" (UID: \"624cee58-45f6-4199-bfae-0fb883077e3f\") "
	Jul 23 14:37:39 addons-140056 kubelet[1548]: I0723 14:37:39.966866    1548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/624cee58-45f6-4199-bfae-0fb883077e3f-tmp-dir\") pod \"624cee58-45f6-4199-bfae-0fb883077e3f\" (UID: \"624cee58-45f6-4199-bfae-0fb883077e3f\") "
	Jul 23 14:37:39 addons-140056 kubelet[1548]: I0723 14:37:39.967407    1548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/624cee58-45f6-4199-bfae-0fb883077e3f-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "624cee58-45f6-4199-bfae-0fb883077e3f" (UID: "624cee58-45f6-4199-bfae-0fb883077e3f"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 23 14:37:39 addons-140056 kubelet[1548]: I0723 14:37:39.973701    1548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624cee58-45f6-4199-bfae-0fb883077e3f-kube-api-access-qm8z9" (OuterVolumeSpecName: "kube-api-access-qm8z9") pod "624cee58-45f6-4199-bfae-0fb883077e3f" (UID: "624cee58-45f6-4199-bfae-0fb883077e3f"). InnerVolumeSpecName "kube-api-access-qm8z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:37:40 addons-140056 kubelet[1548]: I0723 14:37:40.067611    1548 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qm8z9\" (UniqueName: \"kubernetes.io/projected/624cee58-45f6-4199-bfae-0fb883077e3f-kube-api-access-qm8z9\") on node \"addons-140056\" DevicePath \"\""
	Jul 23 14:37:40 addons-140056 kubelet[1548]: I0723 14:37:40.067655    1548 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/624cee58-45f6-4199-bfae-0fb883077e3f-tmp-dir\") on node \"addons-140056\" DevicePath \"\""
	Jul 23 14:37:40 addons-140056 kubelet[1548]: I0723 14:37:40.414871    1548 scope.go:117] "RemoveContainer" containerID="3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4"
	Jul 23 14:37:40 addons-140056 kubelet[1548]: I0723 14:37:40.453481    1548 scope.go:117] "RemoveContainer" containerID="3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4"
	Jul 23 14:37:40 addons-140056 kubelet[1548]: E0723 14:37:40.453922    1548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4\": container with ID starting with 3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4 not found: ID does not exist" containerID="3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4"
	Jul 23 14:37:40 addons-140056 kubelet[1548]: I0723 14:37:40.453980    1548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4"} err="failed to get container status \"3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4\": rpc error: code = NotFound desc = could not find container \"3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4\": container with ID starting with 3019c06c5c17140bbea39ca7d92cf2beaf912feb205ae022abf1897ae8a5a3c4 not found: ID does not exist"
	
	
	==> storage-provisioner [d7535c8a235c47e1aa307559567967c7bd5c1404f060448c5dada7cf0456bd1d] <==
	I0723 14:29:16.022497       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 14:29:16.036719       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 14:29:16.038040       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 14:29:16.053114       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 14:29:16.053383       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-140056_ae590010-2694-4b97-853e-4227fb1b1c3c!
	I0723 14:29:16.054350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24be5b02-8199-4cac-8c8b-78a4be38111e", APIVersion:"v1", ResourceVersion:"898", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-140056_ae590010-2694-4b97-853e-4227fb1b1c3c became leader
	I0723 14:29:16.154627       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-140056_ae590010-2694-4b97-853e-4227fb1b1c3c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-140056 -n addons-140056
helpers_test.go:261: (dbg) Run:  kubectl --context addons-140056 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (317.84s)

TestFunctional/parallel/PersistentVolumeClaim (201.2s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [71112316-b0c5-44bc-981e-ea11ed624421] Running
E0723 14:41:20.267877 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004366306s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-054469 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-054469 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-054469 get pvc myclaim -o=json
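For context, the claim created and inspected here has roughly the following shape — a minimal sketch, not the actual contents of testdata/storage-provisioner/pvc.yaml; the storageClassName "standard" (minikube's default hostpath class) and the size are assumptions:

# Illustrative sketch only -- not the real testdata/storage-provisioner/pvc.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: standard   # assumed: minikube's default class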
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-054469 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [94b33608-40e8-4ee7-84b0-e3f5b11ea400] Pending
helpers_test.go:344: "sp-pod" [94b33608-40e8-4ee7-84b0-e3f5b11ea400] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [94b33608-40e8-4ee7-84b0-e3f5b11ea400] Running
E0723 14:41:30.508093 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00365532s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-054469 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-054469 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-054469 delete -f testdata/storage-provisioner/pod.yaml: (1.299506121s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-054469 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [296b44b0-dfed-4eee-b2fe-faa14e3288ce] Pending
helpers_test.go:344: "sp-pod" [296b44b0-dfed-4eee-b2fe-faa14e3288ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-054469 -n functional-054469
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-07-23 14:44:36.12594714 +0000 UTC m=+1055.944861357
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-054469 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-054469 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-054469/192.168.49.2
Start Time:       Tue, 23 Jul 2024 14:41:35 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6rmmg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-6rmmg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/sp-pod to functional-054469
  Warning  Failed     102s (x2 over 2m30s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     46s (x3 over 2m30s)   kubelet            Error: ErrImagePull
  Warning  Failed     46s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    17s (x4 over 2m30s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     17s (x4 over 2m30s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    4s (x4 over 3m)       kubelet            Pulling image "docker.io/nginx"
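The repeated toomanyrequests failures above are Docker Hub's anonymous pull rate limit, not a storage problem — the claim itself bound. A minimal mitigation sketch, assuming pulls can be authenticated: reference a docker-registry secret from the pod spec so pulls count against an account quota. The secret name "regcred" is hypothetical and would be created beforehand with `kubectl create secret docker-registry`; the rest mirrors the pod described above.

# Illustrative mitigation sketch, not part of the test.
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  imagePullSecrets:
    - name: regcred            # hypothetical pre-created docker-registry secret
  containers:
    - name: myfrontend
      image: docker.io/nginx
      volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim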
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-054469 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-054469 logs sp-pod -n default: exit status 1 (97.350888ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-054469 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-054469
helpers_test.go:235: (dbg) docker inspect functional-054469:

-- stdout --
	[
	    {
	        "Id": "6c28197146e688a4d4c1cf4617a70ddb4cfe1bb59119c6f79a42e70f4ffe0d1d",
	        "Created": "2024-07-23T14:39:02.505329963Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3340475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-23T14:39:02.672322509Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:71a7ac3dcc1f66f9b927c200bbaca5de093c77584a8e2cceb20f7c37b7028780",
	        "ResolvConfPath": "/var/lib/docker/containers/6c28197146e688a4d4c1cf4617a70ddb4cfe1bb59119c6f79a42e70f4ffe0d1d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c28197146e688a4d4c1cf4617a70ddb4cfe1bb59119c6f79a42e70f4ffe0d1d/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c28197146e688a4d4c1cf4617a70ddb4cfe1bb59119c6f79a42e70f4ffe0d1d/hosts",
	        "LogPath": "/var/lib/docker/containers/6c28197146e688a4d4c1cf4617a70ddb4cfe1bb59119c6f79a42e70f4ffe0d1d/6c28197146e688a4d4c1cf4617a70ddb4cfe1bb59119c6f79a42e70f4ffe0d1d-json.log",
	        "Name": "/functional-054469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-054469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-054469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0bae23b51c081a53a4c6308eec582772f1efe4e1e475102e9d0d41d3574223ac-init/diff:/var/lib/docker/overlay2/cc3f8b49bb50b989dafe94ead705091dcc80edbdd409e161d5028bc93b57b742/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bae23b51c081a53a4c6308eec582772f1efe4e1e475102e9d0d41d3574223ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bae23b51c081a53a4c6308eec582772f1efe4e1e475102e9d0d41d3574223ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bae23b51c081a53a4c6308eec582772f1efe4e1e475102e9d0d41d3574223ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-054469",
	                "Source": "/var/lib/docker/volumes/functional-054469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-054469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-054469",
	                "name.minikube.sigs.k8s.io": "functional-054469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b472bb6a8d8e66a5e644e473a9c4567a849f6d11318babdce462641c56fe04be",
	            "SandboxKey": "/var/run/docker/netns/b472bb6a8d8e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-054469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0d1e9185cf78217f90b27ed91fe891751a8c1a4264798c7982005e2a080c34ef",
	                    "EndpointID": "317f6d36b2d46a8286855e43553be29a1bc977a66e20cafb347bb02de7bb4b22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-054469",
	                        "6c28197146e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
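
The NetworkSettings.Ports map in the inspect output above is where the test helpers learn which localhost port reaches each container service (for example, apiserver port 8441/tcp is published on 127.0.0.1:37165). A minimal Go sketch of reading that mapping back out of `docker inspect`; the container name comes from the output above, the struct is trimmed to just the fields shown, and everything else is illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// portBinding mirrors one entry under NetworkSettings.Ports in the
	// docker inspect JSON above.
	type portBinding struct {
		HostIp   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	func main() {
		// docker inspect prints a JSON array with one object per container.
		out, err := exec.Command("docker", "inspect", "functional-054469").Output()
		if err != nil {
			panic(err)
		}
		var containers []struct {
			NetworkSettings struct {
				Ports map[string][]portBinding `json:"Ports"`
			} `json:"NetworkSettings"`
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		// Per the output above this prints 127.0.0.1:37165, the forwarded
		// apiserver endpoint for this profile.
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}

The HostConfig numbers in the inspect output line up with the profile the same way: Memory 4194304000 bytes is the profile's Memory:4000 (MiB), and NanoCpus 2000000000 is CPUs:2.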
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-054469 -n functional-054469
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 logs -n 25: (1.704948703s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-054469 image load --daemon                                | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | docker.io/kicbase/echo-server:functional-054469                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-054469 image ls                                           | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	| image          | functional-054469 image save                                         | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | docker.io/kicbase/echo-server:functional-054469                      |                   |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-054469 image rm                                           | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | docker.io/kicbase/echo-server:functional-054469                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-054469 image ls                                           | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	| image          | functional-054469 image load                                         | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-054469 image ls                                           | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	| image          | functional-054469 image save --daemon                                | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | docker.io/kicbase/echo-server:functional-054469                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-054469 ssh sudo cat                                       | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | /etc/test/nested/copy/3323080/hosts                                  |                   |         |         |                     |                     |
	| ssh            | functional-054469 ssh sudo cat                                       | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | /etc/ssl/certs/3323080.pem                                           |                   |         |         |                     |                     |
	| ssh            | functional-054469 ssh sudo cat                                       | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | /usr/share/ca-certificates/3323080.pem                               |                   |         |         |                     |                     |
	| ssh            | functional-054469 ssh sudo cat                                       | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | /etc/ssl/certs/51391683.0                                            |                   |         |         |                     |                     |
	| ssh            | functional-054469 ssh sudo cat                                       | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | /etc/ssl/certs/33230802.pem                                          |                   |         |         |                     |                     |
	| ssh            | functional-054469 ssh sudo cat                                       | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | /usr/share/ca-certificates/33230802.pem                              |                   |         |         |                     |                     |
	| ssh            | functional-054469 ssh sudo cat                                       | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                            |                   |         |         |                     |                     |
	| image          | functional-054469                                                    | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | image ls --format short                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-054469                                                    | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | image ls --format yaml                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-054469 ssh pgrep                                          | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC |                     |
	|                | buildkitd                                                            |                   |         |         |                     |                     |
	| image          | functional-054469 image build -t                                     | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | localhost/my-image:functional-054469                                 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                     |                   |         |         |                     |                     |
	| image          | functional-054469 image ls                                           | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	| image          | functional-054469                                                    | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | image ls --format json                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-054469                                                    | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | image ls --format table                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| update-context | functional-054469                                                    | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-054469                                                    | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-054469                                                    | functional-054469 | jenkins | v1.33.1 | 23 Jul 24 14:42 UTC | 23 Jul 24 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:42:16
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:42:16.703083 3350990 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:42:16.703649 3350990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:42:16.703684 3350990 out.go:304] Setting ErrFile to fd 2...
	I0723 14:42:16.703703 3350990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:42:16.703977 3350990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:42:16.704395 3350990 out.go:298] Setting JSON to false
	I0723 14:42:16.705395 3350990 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":84283,"bootTime":1721661454,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 14:42:16.705491 3350990 start.go:139] virtualization:  
	I0723 14:42:16.708023 3350990 out.go:177] * [functional-054469] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0723 14:42:16.710944 3350990 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:42:16.711072 3350990 notify.go:220] Checking for updates...
	I0723 14:42:16.714475 3350990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:42:16.716575 3350990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:42:16.718181 3350990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 14:42:16.720165 3350990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 14:42:16.721734 3350990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:42:16.723977 3350990 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:42:16.724624 3350990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:42:16.748774 3350990 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 14:42:16.748897 3350990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:42:16.820464 3350990 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-23 14:42:16.807295142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:42:16.820569 3350990 docker.go:307] overlay module found
	I0723 14:42:16.822775 3350990 out.go:177] * Using the docker driver based on existing profile
	I0723 14:42:16.824485 3350990 start.go:297] selected driver: docker
	I0723 14:42:16.824503 3350990 start.go:901] validating driver "docker" against &{Name:functional-054469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-054469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:42:16.824626 3350990 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:42:16.824730 3350990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:42:16.877719 3350990 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-23 14:42:16.86874866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:42:16.878131 3350990 cni.go:84] Creating CNI manager for ""
	I0723 14:42:16.878147 3350990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:42:16.878209 3350990 start.go:340] cluster config:
	{Name:functional-054469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-054469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:42:16.881579 3350990 out.go:177] * dry-run validation complete!
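
The single-line blobs in the "validating driver" and "cluster config" entries above are minikube's cluster config struct rendered with Go's %+v verb. A minimal sketch of why it prints that way, with the struct trimmed to a few fields visible in the log (the real type has many more; this stand-in is illustrative only):

	package main

	import "fmt"

	// clusterConfig is a stand-in for minikube's cluster config, keeping
	// only fields that appear in the log above.
	type clusterConfig struct {
		Name              string
		Driver            string
		Memory            int // MiB; logged as Memory:4000
		CPUs              int
		KubernetesVersion string
		ContainerRuntime  string
	}

	func main() {
		cfg := clusterConfig{
			Name:              "functional-054469",
			Driver:            "docker",
			Memory:            4000,
			CPUs:              2,
			KubernetesVersion: "v1.30.3",
			ContainerRuntime:  "crio",
		}
		// %+v emits the whole struct as one field:value line, which is the
		// format of the long config lines in the log.
		fmt.Printf("%+v\n", cfg)
	}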
	
	
	==> CRI-O <==
	Jul 23 14:42:24 functional-054469 crio[4126]: time="2024-07-23 14:42:24.559160578Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a],Size_:42263767,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=265905ec-bb26-41cc-a867-4d5d8614f3a5 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:42:24 functional-054469 crio[4126]: time="2024-07-23 14:42:24.560221559Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9t69t/dashboard-metrics-scraper" id=5f64c5cd-102e-4578-88ab-404d417a515c name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 14:42:24 functional-054469 crio[4126]: time="2024-07-23 14:42:24.560346246Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 23 14:42:24 functional-054469 crio[4126]: time="2024-07-23 14:42:24.583220785Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/916e905acfcd838985b10045696a2ec1ef208501947f7f6f82f46aeb8ef7a508/merged/etc/group: no such file or directory"
	Jul 23 14:42:24 functional-054469 crio[4126]: time="2024-07-23 14:42:24.624401106Z" level=info msg="Created container d4cc51dbcbffffe2c02b24ea69d3130d9cfb40a60a1c51ec0ed4f6c123bc7220: kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9t69t/dashboard-metrics-scraper" id=5f64c5cd-102e-4578-88ab-404d417a515c name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 14:42:24 functional-054469 crio[4126]: time="2024-07-23 14:42:24.625285476Z" level=info msg="Starting container: d4cc51dbcbffffe2c02b24ea69d3130d9cfb40a60a1c51ec0ed4f6c123bc7220" id=3a1839f3-640f-4bd2-99d0-20e765c15503 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 14:42:24 functional-054469 crio[4126]: time="2024-07-23 14:42:24.636848426Z" level=info msg="Started container" PID=6687 containerID=d4cc51dbcbffffe2c02b24ea69d3130d9cfb40a60a1c51ec0ed4f6c123bc7220 description=kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9t69t/dashboard-metrics-scraper id=3a1839f3-640f-4bd2-99d0-20e765c15503 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6fa154f50a671096d86520b1ef38eb693503c9938484d25d47cdd1017e85f3d
	Jul 23 14:42:27 functional-054469 crio[4126]: time="2024-07-23 14:42:27.279318660Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-054469" id=ee771101-a4a7-46f1-a4ab-a470af098954 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:42:27 functional-054469 crio[4126]: time="2024-07-23 14:42:27.279558516Z" level=info msg="Image docker.io/kicbase/echo-server:functional-054469 not found" id=ee771101-a4a7-46f1-a4ab-a470af098954 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:42:30 functional-054469 crio[4126]: time="2024-07-23 14:42:30.702781559Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-054469" id=81af305f-87d4-4ee7-b6e0-cfa995d55d9d name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:42:30 functional-054469 crio[4126]: time="2024-07-23 14:42:30.703014638Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:functional-054469],RepoDigests:[docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a],Size_:4788229,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=81af305f-87d4-4ee7-b6e0-cfa995d55d9d name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:43:05 functional-054469 crio[4126]: time="2024-07-23 14:43:05.266494111Z" level=info msg="Checking image status: docker.io/nginx:latest" id=5ebe01a7-7885-45ca-a397-17e67630a66a name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:43:05 functional-054469 crio[4126]: time="2024-07-23 14:43:05.267257339Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5ebe01a7-7885-45ca-a397-17e67630a66a name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:43:20 functional-054469 crio[4126]: time="2024-07-23 14:43:20.266078557Z" level=info msg="Checking image status: docker.io/nginx:latest" id=255c907c-92ec-4d18-93a5-e712a7b4bfbd name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:43:20 functional-054469 crio[4126]: time="2024-07-23 14:43:20.266304349Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=255c907c-92ec-4d18-93a5-e712a7b4bfbd name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:43:20 functional-054469 crio[4126]: time="2024-07-23 14:43:20.267159730Z" level=info msg="Pulling image: docker.io/nginx:latest" id=9beaec07-b3c2-4acd-9cb5-a3852f5797a5 name=/runtime.v1.ImageService/PullImage
	Jul 23 14:43:20 functional-054469 crio[4126]: time="2024-07-23 14:43:20.269607873Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Jul 23 14:44:04 functional-054469 crio[4126]: time="2024-07-23 14:44:04.265422236Z" level=info msg="Checking image status: docker.io/nginx:latest" id=7575dfee-f2ce-446b-a580-e8ba98244ef4 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:44:04 functional-054469 crio[4126]: time="2024-07-23 14:44:04.265636590Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7575dfee-f2ce-446b-a580-e8ba98244ef4 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:44:19 functional-054469 crio[4126]: time="2024-07-23 14:44:19.265719638Z" level=info msg="Checking image status: docker.io/nginx:latest" id=2d4da08f-4e14-4025-8f28-a5cd812f68f3 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:44:19 functional-054469 crio[4126]: time="2024-07-23 14:44:19.265979088Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2d4da08f-4e14-4025-8f28-a5cd812f68f3 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:44:32 functional-054469 crio[4126]: time="2024-07-23 14:44:32.265457755Z" level=info msg="Checking image status: docker.io/nginx:latest" id=583e8279-ddbe-4156-9206-f6ab6f138b45 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:44:32 functional-054469 crio[4126]: time="2024-07-23 14:44:32.265705340Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e],Size_:197104786,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=583e8279-ddbe-4156-9206-f6ab6f138b45 name=/runtime.v1.ImageService/ImageStatus
	Jul 23 14:44:32 functional-054469 crio[4126]: time="2024-07-23 14:44:32.266194685Z" level=info msg="Pulling image: docker.io/nginx:latest" id=62badfa9-e50e-4ab8-a93a-8d68ecd07c87 name=/runtime.v1.ImageService/PullImage
	Jul 23 14:44:32 functional-054469 crio[4126]: time="2024-07-23 14:44:32.268686799Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	d4cc51dbcbfff       docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   2 minutes ago       Running             dashboard-metrics-scraper   0                   a6fa154f50a67       dashboard-metrics-scraper-b5fc48f67-9t69t
	2174e369e948e       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   a72316b7b1b52       kubernetes-dashboard-779776cb65-d42nd
	77095aa5c842e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago       Exited              mount-munger                0                   cf0e4fd65721a       busybox-mount
	1e9779bb588c8       72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb                                                 2 minutes ago       Running             echoserver-arm              0                   dc2f613a1818c       hello-node-65f5d5cc78-jc8zb
	c9e9d79e0dcf8       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5           3 minutes ago       Running             echoserver-arm              0                   b043c2d212bb7       hello-node-connect-6f49f58cd5-llhcb
	c1c88680860f8       docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e                  3 minutes ago       Running             nginx                       0                   cf55bb0d45b9c       nginx-svc
	b7691efffeb15       f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800                                                 3 minutes ago       Running             kindnet-cni                 2                   4d916b287a55b       kindnet-5gms5
	38e0ef28a2017       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                 3 minutes ago       Running             kube-proxy                  2                   de996d701a439       kube-proxy-dgktz
	2dbe99cc1ede7       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                 3 minutes ago       Running             coredns                     2                   c04430f60816f       coredns-7db6d8ff4d-t6b76
	f7a1543cfd603       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 3 minutes ago       Running             storage-provisioner         2                   92f6d1646abb2       storage-provisioner
	63e7a30cd7efe       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                 3 minutes ago       Running             kube-apiserver              0                   e5d403717de27       kube-apiserver-functional-054469
	9898cae8df084       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                 3 minutes ago       Running             etcd                        2                   3a230e09ee4fd       etcd-functional-054469
	68d69b47a0a2f       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                 3 minutes ago       Running             kube-controller-manager     2                   9af01479ae927       kube-controller-manager-functional-054469
	0c57428828363       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                 3 minutes ago       Running             kube-scheduler              2                   8d35e8edf7b36       kube-scheduler-functional-054469
	865af7eaf055a       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                 4 minutes ago       Exited              etcd                        1                   3a230e09ee4fd       etcd-functional-054469
	e8ff787c65c5c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 4 minutes ago       Exited              storage-provisioner         1                   92f6d1646abb2       storage-provisioner
	d4e2e1dfb2055       f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800                                                 4 minutes ago       Exited              kindnet-cni                 1                   4d916b287a55b       kindnet-5gms5
	bb5d3fd9be899       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                 4 minutes ago       Exited              kube-scheduler              1                   8d35e8edf7b36       kube-scheduler-functional-054469
	9ee8dd0f73ddb       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                 4 minutes ago       Exited              kube-proxy                  1                   de996d701a439       kube-proxy-dgktz
	b8ce5569eb516       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                 4 minutes ago       Exited              kube-controller-manager     1                   9af01479ae927       kube-controller-manager-functional-054469
	fc08c24a7025d       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                 4 minutes ago       Exited              coredns                     1                   c04430f60816f       coredns-7db6d8ff4d-t6b76
	
	
	==> coredns [2dbe99cc1ede7da41c5c453fbb47a1ca35c2d74dbdc76ff6eb4e95f31e5f49ed] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41569 - 43430 "HINFO IN 2441576662119557996.6050959304452424465. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024499808s
	
	
	==> coredns [fc08c24a7025df8b2981875295820422a28b2ca4bbc0a08ec505299987a3b67e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51756 - 57891 "HINFO IN 2364641526566246480.6321763285545537697. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011126882s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
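
The connection-refused entries followed by a clean startup in the second coredns log above are the expected pattern while kube-apiserver restarts: client-go's reflector keeps retrying until 10.96.0.1:443 answers. A minimal sketch of the same wait-until-ready idea against the forwarded apiserver port from the inspect output earlier; the URL, endpoint path, and timings are assumptions for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves a self-signed cert, so skip verification
		// for this probe (fine for a local readiness check only).
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		// Assumed endpoint: the forwarded apiserver port from the docker
		// inspect output earlier (8441/tcp -> 127.0.0.1:37165).
		url := "https://127.0.0.1:37165/readyz"
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver ready")
				return
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(2 * time.Second) // retry-until-up, like the reflector above
		}
		fmt.Println("gave up waiting for apiserver")
	}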
	
	
	==> describe nodes <==
	Name:               functional-054469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-054469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=functional-054469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T14_39_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 14:39:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-054469
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 14:44:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 14:42:52 +0000   Tue, 23 Jul 2024 14:39:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 14:42:52 +0000   Tue, 23 Jul 2024 14:39:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 14:42:52 +0000   Tue, 23 Jul 2024 14:39:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 14:42:52 +0000   Tue, 23 Jul 2024 14:39:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-054469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b74588390f44e33a76c9f69b52d1fad
	  System UUID:                a7f8b490-4625-45f6-ac99-89a35ccecdf1
	  Boot ID:                    95e04985-bf92-47a1-9b5b-7f09371b9e30
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-jc8zb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     hello-node-connect-6f49f58cd5-llhcb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7db6d8ff4d-t6b76                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m1s
	  kube-system                 etcd-functional-054469                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m14s
	  kube-system                 kindnet-5gms5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m1s
	  kube-system                 kube-apiserver-functional-054469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-controller-manager-functional-054469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-dgktz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-functional-054469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-9t69t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-d42nd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m58s                  kube-proxy       
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  Starting                 4m29s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node functional-054469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m21s (x9 over 5m21s)  kubelet          Node functional-054469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node functional-054469 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m14s                  kubelet          Node functional-054469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s                  kubelet          Node functional-054469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s                  kubelet          Node functional-054469 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           5m2s                   node-controller  Node functional-054469 event: Registered Node functional-054469 in Controller
	  Normal  NodeReady                4m46s                  kubelet          Node functional-054469 status is now: NodeReady
	  Normal  RegisteredNode           4m18s                  node-controller  Node functional-054469 event: Registered Node functional-054469 in Controller
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node functional-054469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node functional-054469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x8 over 3m52s)  kubelet          Node functional-054469 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m35s                  node-controller  Node functional-054469 event: Registered Node functional-054469 in Controller
	
	
	==> dmesg <==
	[  +0.001118] FS-Cache: O-key=[8] '1a733b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=000000e4 [p=000000db fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000df96581b
	[  +0.001091] FS-Cache: N-key=[8] '1a733b0000000000'
	[  +0.003238] FS-Cache: Duplicate cookie detected
	[  +0.000706] FS-Cache: O-cookie c=000000de [p=000000db fl=226 nc=0 na=1]
	[  +0.001048] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=00000000a77d32c1
	[  +0.001108] FS-Cache: O-key=[8] '1a733b0000000000'
	[  +0.000748] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
	[  +0.000997] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000003ba586a5
	[  +0.001137] FS-Cache: N-key=[8] '1a733b0000000000'
	[  +2.731848] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=00000000ee5383df
	[  +0.001099] FS-Cache: O-key=[8] '19733b0000000000'
	[  +0.000771] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000052ff0a6
	[  +0.001114] FS-Cache: N-key=[8] '19733b0000000000'
	[  +0.302039] FS-Cache: Duplicate cookie detected
	[  +0.000741] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=000000001645a21b
	[  +0.001107] FS-Cache: O-key=[8] '1f733b0000000000'
	[  +0.000755] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000df96581b
	[  +0.001106] FS-Cache: N-key=[8] '1f733b0000000000'
	
	
	==> etcd [865af7eaf055a05ad5f9f3b97d32fd228e4ff7d05dc8fc4248f27b934fde197b] <==
	{"level":"info","ts":"2024-07-23T14:40:03.88609Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-23T14:40:05.518568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T14:40:05.518618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T14:40:05.518647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-23T14:40:05.518661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T14:40:05.518667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-23T14:40:05.518683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-07-23T14:40:05.518691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-23T14:40:05.53071Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-054469 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T14:40:05.530838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:40:05.532613Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-23T14:40:05.532791Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:40:05.533016Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T14:40:05.533041Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T14:40:05.534397Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T14:40:33.049103Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-23T14:40:33.049485Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-054469","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-07-23T14:40:33.050027Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:40:33.051996Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:40:33.080629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T14:40:33.08156Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-23T14:40:33.08174Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-07-23T14:40:33.084066Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-23T14:40:33.084213Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-23T14:40:33.084312Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-054469","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [9898cae8df0840be6fdd9205e21f7ab266a90ccf79fbaa6348ebd62f74feff0b] <==
	{"level":"info","ts":"2024-07-23T14:40:46.032054Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T14:40:46.032128Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T14:40:46.034807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-07-23T14:40:46.034933Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-07-23T14:40:46.03507Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:40:46.035161Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T14:40:46.056882Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T14:40:46.058596Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-23T14:40:46.059879Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-23T14:40:46.058703Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T14:40:46.058734Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T14:40:47.398556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-23T14:40:47.39867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-23T14:40:47.398713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-23T14:40:47.398758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-07-23T14:40:47.398792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-07-23T14:40:47.39883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-07-23T14:40:47.398871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-07-23T14:40:47.406702Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-054469 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T14:40:47.406805Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:40:47.407063Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T14:40:47.408824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-23T14:40:47.416103Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T14:40:47.422542Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T14:40:47.422634Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:44:37 up 23:27,  0 users,  load average: 0.38, 0.81, 1.32
	Linux functional-054469 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b7691efffeb15a8186e6ad0f241ae960dd906abd4a94f81336ff73d7bf76e3a7] <==
	I0723 14:43:32.124191       1 main.go:299] handling current node
	I0723 14:43:42.123524       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:43:42.123566       1 main.go:299] handling current node
	W0723 14:43:47.546576       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:43:47.546610       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0723 14:43:52.124128       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:43:52.124164       1 main.go:299] handling current node
	W0723 14:43:55.487559       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 14:43:55.487718       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 14:44:01.464718       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:44:01.464756       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0723 14:44:02.123432       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:44:02.123465       1 main.go:299] handling current node
	I0723 14:44:12.124145       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:44:12.124180       1 main.go:299] handling current node
	I0723 14:44:22.123461       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:44:22.123496       1 main.go:299] handling current node
	W0723 14:44:25.358747       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:44:25.358799       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0723 14:44:32.124243       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:44:32.124288       1 main.go:299] handling current node
	W0723 14:44:33.896544       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 14:44:33.896578       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 14:44:34.261175       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:44:34.261221       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	
	
	==> kindnet [d4e2e1dfb2055583d39756e7f15ad11c27a3c8d890f63bac8b805bd88642e7a9] <==
	E0723 14:40:09.650604       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 14:40:09.775131       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:40:09.775167       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 14:40:11.863883       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:40:11.863917       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:40:12.223927       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 14:40:12.223972       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 14:40:12.597144       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:40:12.597176       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0723 14:40:14.045798       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:40:14.045865       1 main.go:299] handling current node
	W0723 14:40:15.186956       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:40:15.186988       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 14:40:16.232816       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 14:40:16.232858       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 14:40:17.131304       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:40:17.131339       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0723 14:40:24.045470       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0723 14:40:24.045606       1 main.go:299] handling current node
	W0723 14:40:24.047381       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 14:40:24.047417       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 14:40:24.759235       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 14:40:24.759274       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 14:40:26.979163       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 14:40:26.979197       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
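
	Both kindnet containers repeatedly log RBAC "forbidden" errors: the kube-system:kindnet service account cannot list pods, namespaces, or networkpolicies at the cluster scope, so those reflectors never sync. As a diagnostic sketch, the account's effective permissions can be probed directly:

	  # Probe what the kindnet service account is actually allowed to do
	  kubectl --context functional-054469 auth can-i list pods \
	    --as=system:serviceaccount:kube-system:kindnet
	  kubectl --context functional-054469 auth can-i list namespaces \
	    --as=system:serviceaccount:kube-system:kindnet
	  kubectl --context functional-054469 auth can-i list networkpolicies.networking.k8s.io \
	    --as=system:serviceaccount:kube-system:kindnet

	If these answer "no", the fix would be a ClusterRole and ClusterRoleBinding granting list/watch on those resources; the manifest kindnet ships with is not part of this log, so none is reproduced here.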
	
	
	==> kube-apiserver [63e7a30cd7efe48d73e22a754c3b43101556b99be7564149148240f7796aec97] <==
	I0723 14:40:50.436752       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 14:40:50.436862       1 policy_source.go:224] refreshing policies
	I0723 14:40:50.436946       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0723 14:40:50.437066       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0723 14:40:50.441760       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 14:40:50.449298       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 14:40:50.451186       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0723 14:40:50.468512       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0723 14:40:51.173927       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0723 14:40:52.282596       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 14:40:52.439674       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0723 14:40:52.453410       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0723 14:40:52.582783       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0723 14:40:52.596719       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0723 14:41:09.177641       1 controller.go:615] quota admission added evaluator for: endpoints
	I0723 14:41:11.397833       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.167.98"}
	I0723 14:41:11.423652       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0723 14:41:17.614388       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.11.2"}
	I0723 14:41:28.097535       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0723 14:41:28.210078       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.88.117"}
	E0723 14:41:34.340767       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32794: use of closed network connection
	I0723 14:41:39.763802       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.255.122"}
	I0723 14:42:18.092717       1 controller.go:615] quota admission added evaluator for: namespaces
	I0723 14:42:18.351252       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.247.105"}
	I0723 14:42:18.374987       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.44.114"}
	
	
	==> kube-controller-manager [68d69b47a0a2fcbcde6db66caef7c5d5cdc07b9775cce06b3d3df5ecab2c1eda] <==
	I0723 14:41:40.642702       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="36.251µs"
	I0723 14:42:18.182238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="29.742001ms"
	E0723 14:42:18.182276       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0723 14:42:18.200143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="17.562738ms"
	E0723 14:42:18.200179       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0723 14:42:18.221272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="21.054238ms"
	E0723 14:42:18.221411       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0723 14:42:18.222185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="46.816863ms"
	E0723 14:42:18.222306       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0723 14:42:18.234274       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.759368ms"
	E0723 14:42:18.234311       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0723 14:42:18.234749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="13.303251ms"
	E0723 14:42:18.234774       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0723 14:42:18.279048       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="44.706127ms"
	I0723 14:42:18.288826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="46.371523ms"
	I0723 14:42:18.296574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.468837ms"
	I0723 14:42:18.296680       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="65.174µs"
	I0723 14:42:18.303454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="56.001µs"
	I0723 14:42:18.310689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="21.808957ms"
	I0723 14:42:18.310977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="116.49µs"
	I0723 14:42:18.328139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="96.436µs"
	I0723 14:42:23.754289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.934042ms"
	I0723 14:42:23.755605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="53.909µs"
	I0723 14:42:24.745520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="6.084975ms"
	I0723 14:42:24.745600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="38.918µs"
	
	
	==> kube-controller-manager [b8ce5569eb516e91d3c21c1b2d707e311eb94272376b15ad9453dc310b6cf9df] <==
	I0723 14:40:19.303651       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0723 14:40:19.303707       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0723 14:40:19.306112       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0723 14:40:19.312405       1 shared_informer.go:320] Caches are synced for taint
	I0723 14:40:19.312500       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0723 14:40:19.312593       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-054469"
	I0723 14:40:19.312653       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0723 14:40:19.317910       1 shared_informer.go:320] Caches are synced for ephemeral
	I0723 14:40:19.320220       1 shared_informer.go:320] Caches are synced for endpoint
	I0723 14:40:19.321487       1 shared_informer.go:320] Caches are synced for TTL
	I0723 14:40:19.328741       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0723 14:40:19.335319       1 shared_informer.go:320] Caches are synced for GC
	I0723 14:40:19.348651       1 shared_informer.go:320] Caches are synced for attach detach
	I0723 14:40:19.352010       1 shared_informer.go:320] Caches are synced for PV protection
	I0723 14:40:19.383859       1 shared_informer.go:320] Caches are synced for daemon sets
	I0723 14:40:19.388003       1 shared_informer.go:320] Caches are synced for stateful set
	I0723 14:40:19.395213       1 shared_informer.go:320] Caches are synced for service account
	I0723 14:40:19.439784       1 shared_informer.go:320] Caches are synced for disruption
	I0723 14:40:19.475446       1 shared_informer.go:320] Caches are synced for namespace
	I0723 14:40:19.477755       1 shared_informer.go:320] Caches are synced for deployment
	I0723 14:40:19.531336       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 14:40:19.533798       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 14:40:19.973837       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:40:20.001198       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 14:40:20.001233       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [38e0ef28a2017bb6bb670597838e87f47970bac4485cb26bfcefcc6121825982] <==
	I0723 14:40:51.852693       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:40:51.867930       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0723 14:40:51.935789       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0723 14:40:51.935854       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:40:51.946436       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0723 14:40:51.946467       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0723 14:40:51.946507       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:40:51.946750       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:40:51.946773       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:40:51.950964       1 config.go:192] "Starting service config controller"
	I0723 14:40:51.951067       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:40:51.951143       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:40:51.951183       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:40:51.951656       1 config.go:319] "Starting node config controller"
	I0723 14:40:51.951720       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:40:52.052035       1 shared_informer.go:320] Caches are synced for node config
	I0723 14:40:52.052071       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:40:52.052106       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [9ee8dd0f73ddbf0cd56d70e55c527d6273f9e90b21e1368d54a5d5f6df34cd4a] <==
	I0723 14:40:04.775702       1 server_linux.go:69] "Using iptables proxy"
	I0723 14:40:08.296199       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0723 14:40:08.478025       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0723 14:40:08.478151       1 server_linux.go:165] "Using iptables Proxier"
	I0723 14:40:08.487016       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0723 14:40:08.487065       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0723 14:40:08.487094       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 14:40:08.487363       1 server.go:872] "Version info" version="v1.30.3"
	I0723 14:40:08.487383       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:40:08.504446       1 config.go:192] "Starting service config controller"
	I0723 14:40:08.504556       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 14:40:08.504633       1 config.go:101] "Starting endpoint slice config controller"
	I0723 14:40:08.504669       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 14:40:08.505747       1 config.go:319] "Starting node config controller"
	I0723 14:40:08.505819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 14:40:08.605775       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 14:40:08.605831       1 shared_informer.go:320] Caches are synced for service config
	I0723 14:40:08.606078       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0c57428828363007bc4ad1771392698bb0e47852b0ffee1de09afcbde40a7b1c] <==
	I0723 14:40:47.425063       1 serving.go:380] Generated self-signed cert in-memory
	I0723 14:40:50.482805       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 14:40:50.482915       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:40:50.490960       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0723 14:40:50.491060       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0723 14:40:50.491131       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 14:40:50.491944       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 14:40:50.491974       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 14:40:50.492426       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:40:50.491983       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0723 14:40:50.499875       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0723 14:40:50.591669       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0723 14:40:50.594241       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:40:50.601045       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [bb5d3fd9be899db0048dce44ed8d68fb6a72847557e491a3fa6c30389d4a4a7b] <==
	I0723 14:40:06.286023       1 serving.go:380] Generated self-signed cert in-memory
	W0723 14:40:08.047230       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 14:40:08.047363       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 14:40:08.047401       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 14:40:08.047445       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 14:40:08.205866       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 14:40:08.205969       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 14:40:08.213767       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 14:40:08.214107       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 14:40:08.214573       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:40:08.214637       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 14:40:08.315341       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 14:40:33.048316       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0723 14:40:33.048376       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0723 14:40:33.048493       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 23 14:42:09 functional-054469 kubelet[4464]: I0723 14:42:09.756913    4464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06838c0-da4f-4a3f-97d5-48d4d262f59b-kube-api-access-mhwwg" (OuterVolumeSpecName: "kube-api-access-mhwwg") pod "e06838c0-da4f-4a3f-97d5-48d4d262f59b" (UID: "e06838c0-da4f-4a3f-97d5-48d4d262f59b"). InnerVolumeSpecName "kube-api-access-mhwwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 23 14:42:09 functional-054469 kubelet[4464]: I0723 14:42:09.855152    4464 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mhwwg\" (UniqueName: \"kubernetes.io/projected/e06838c0-da4f-4a3f-97d5-48d4d262f59b-kube-api-access-mhwwg\") on node \"functional-054469\" DevicePath \"\""
	Jul 23 14:42:09 functional-054469 kubelet[4464]: I0723 14:42:09.855193    4464 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e06838c0-da4f-4a3f-97d5-48d4d262f59b-test-volume\") on node \"functional-054469\" DevicePath \"\""
	Jul 23 14:42:10 functional-054469 kubelet[4464]: I0723 14:42:10.692432    4464 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf0e4fd65721a39270fd25fc6449bb48e9c27a196b580c572b2c4dde671d467b"
	Jul 23 14:42:18 functional-054469 kubelet[4464]: I0723 14:42:18.258424    4464 topology_manager.go:215] "Topology Admit Handler" podUID="6f123475-eaf4-48e4-b5ed-7764d7083bc6" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-d42nd"
	Jul 23 14:42:18 functional-054469 kubelet[4464]: E0723 14:42:18.258508    4464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e06838c0-da4f-4a3f-97d5-48d4d262f59b" containerName="mount-munger"
	Jul 23 14:42:18 functional-054469 kubelet[4464]: I0723 14:42:18.258559    4464 memory_manager.go:354] "RemoveStaleState removing state" podUID="e06838c0-da4f-4a3f-97d5-48d4d262f59b" containerName="mount-munger"
	Jul 23 14:42:18 functional-054469 kubelet[4464]: I0723 14:42:18.274141    4464 topology_manager.go:215] "Topology Admit Handler" podUID="0490e0d0-e44e-46db-88bb-723ff20f9dbb" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-9t69t"
	Jul 23 14:42:18 functional-054469 kubelet[4464]: I0723 14:42:18.314086    4464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgc5\" (UniqueName: \"kubernetes.io/projected/0490e0d0-e44e-46db-88bb-723ff20f9dbb-kube-api-access-jsgc5\") pod \"dashboard-metrics-scraper-b5fc48f67-9t69t\" (UID: \"0490e0d0-e44e-46db-88bb-723ff20f9dbb\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9t69t"
	Jul 23 14:42:18 functional-054469 kubelet[4464]: I0723 14:42:18.314146    4464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6f123475-eaf4-48e4-b5ed-7764d7083bc6-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-d42nd\" (UID: \"6f123475-eaf4-48e4-b5ed-7764d7083bc6\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-d42nd"
	Jul 23 14:42:18 functional-054469 kubelet[4464]: I0723 14:42:18.314174    4464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0490e0d0-e44e-46db-88bb-723ff20f9dbb-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-9t69t\" (UID: \"0490e0d0-e44e-46db-88bb-723ff20f9dbb\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9t69t"
	Jul 23 14:42:18 functional-054469 kubelet[4464]: I0723 14:42:18.314198    4464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgpxq\" (UniqueName: \"kubernetes.io/projected/6f123475-eaf4-48e4-b5ed-7764d7083bc6-kube-api-access-hgpxq\") pod \"kubernetes-dashboard-779776cb65-d42nd\" (UID: \"6f123475-eaf4-48e4-b5ed-7764d7083bc6\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-d42nd"
	Jul 23 14:42:23 functional-054469 kubelet[4464]: I0723 14:42:23.735063    4464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-d42nd" podStartSLOduration=1.369620007 podStartE2EDuration="5.735041564s" podCreationTimestamp="2024-07-23 14:42:18 +0000 UTC" firstStartedPulling="2024-07-23 14:42:18.606587919 +0000 UTC m=+93.543629096" lastFinishedPulling="2024-07-23 14:42:22.972009484 +0000 UTC m=+97.909050653" observedRunningTime="2024-07-23 14:42:23.734606603 +0000 UTC m=+98.671647780" watchObservedRunningTime="2024-07-23 14:42:23.735041564 +0000 UTC m=+98.672082733"
	Jul 23 14:42:54 functional-054469 kubelet[4464]: E0723 14:42:54.849125    4464 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jul 23 14:42:54 functional-054469 kubelet[4464]: E0723 14:42:54.849193    4464 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jul 23 14:42:54 functional-054469 kubelet[4464]: E0723 14:42:54.849295    4464 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rmmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(296b44b0-dfed-4eee-b2fe-faa14e3288ce): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Jul 23 14:42:54 functional-054469 kubelet[4464]: E0723 14:42:54.849325    4464 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="296b44b0-dfed-4eee-b2fe-faa14e3288ce"
	Jul 23 14:43:05 functional-054469 kubelet[4464]: E0723 14:43:05.267672    4464 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="296b44b0-dfed-4eee-b2fe-faa14e3288ce"
	Jul 23 14:43:05 functional-054469 kubelet[4464]: I0723 14:43:05.278651    4464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9t69t" podStartSLOduration=41.350148898 podStartE2EDuration="47.278632648s" podCreationTimestamp="2024-07-23 14:42:18 +0000 UTC" firstStartedPulling="2024-07-23 14:42:18.626260843 +0000 UTC m=+93.563302012" lastFinishedPulling="2024-07-23 14:42:24.554744495 +0000 UTC m=+99.491785762" observedRunningTime="2024-07-23 14:42:24.739006775 +0000 UTC m=+99.676047952" watchObservedRunningTime="2024-07-23 14:43:05.278632648 +0000 UTC m=+140.215673817"
	Jul 23 14:43:50 functional-054469 kubelet[4464]: E0723 14:43:50.664330    4464 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jul 23 14:43:50 functional-054469 kubelet[4464]: E0723 14:43:50.664397    4464 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jul 23 14:43:50 functional-054469 kubelet[4464]: E0723 14:43:50.664494    4464 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rmmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(296b44b0-dfed-4eee-b2fe-faa14e3288ce): ErrImagePull: loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Jul 23 14:43:50 functional-054469 kubelet[4464]: E0723 14:43:50.664526    4464 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="296b44b0-dfed-4eee-b2fe-faa14e3288ce"
	Jul 23 14:44:04 functional-054469 kubelet[4464]: E0723 14:44:04.265881    4464 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="296b44b0-dfed-4eee-b2fe-faa14e3288ce"
	Jul 23 14:44:19 functional-054469 kubelet[4464]: E0723 14:44:19.266387    4464 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="296b44b0-dfed-4eee-b2fe-faa14e3288ce"
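
	The kubelet entries above explain why default/sp-pod never starts: every pull of docker.io/nginx fails with toomanyrequests (Docker Hub's anonymous pull rate limit), leaving the pod in ImagePullBackOff. Two hedged workaround sketches follow; the secret name dockerhub-creds and the credential placeholders are illustrative, not taken from this run:

	  # Authenticate pulls so nginx stops hitting the anonymous rate limit,
	  # then reference the secret from the pod spec via spec.imagePullSecrets
	  kubectl --context functional-054469 create secret docker-registry dockerhub-creds \
	    --docker-server=https://index.docker.io/v1/ \
	    --docker-username=<user> --docker-password=<access-token>

	  # Or side-load an image the host Docker daemon already has, so the node
	  # never contacts the registry at all
	  minikube -p functional-054469 image load docker.io/nginx:latest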
	
	
	==> kubernetes-dashboard [2174e369e948e619ae1ad032ba8fe272675c9c1a6fb29b4119d6c6f798aaf7b5] <==
	2024/07/23 14:42:23 Using namespace: kubernetes-dashboard
	2024/07/23 14:42:23 Using in-cluster config to connect to apiserver
	2024/07/23 14:42:23 Using secret token for csrf signing
	2024/07/23 14:42:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/23 14:42:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/23 14:42:23 Successful initial request to the apiserver, version: v1.30.3
	2024/07/23 14:42:23 Generating JWE encryption key
	2024/07/23 14:42:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/23 14:42:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/23 14:42:23 Initializing JWE encryption key from synchronized object
	2024/07/23 14:42:23 Creating in-cluster Sidecar client
	2024/07/23 14:42:23 Serving insecurely on HTTP port: 9090
	2024/07/23 14:42:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/23 14:42:53 Successful request to sidecar
	2024/07/23 14:42:23 Starting overwatch
	
	
	==> storage-provisioner [e8ff787c65c5cc0be014df812f09fe4da336cddcc8451ee932434c4463ca3727] <==
	I0723 14:40:04.432523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 14:40:08.284008       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 14:40:08.295231       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 14:40:25.702169       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 14:40:25.702362       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-054469_0bc3c87b-db4d-470d-952a-33043b7e1d36!
	I0723 14:40:25.702812       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8628487c-5762-45db-9b0f-42e8c627871e", APIVersion:"v1", ResourceVersion:"537", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-054469_0bc3c87b-db4d-470d-952a-33043b7e1d36 became leader
	I0723 14:40:25.805072       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-054469_0bc3c87b-db4d-470d-952a-33043b7e1d36!
	
	
	==> storage-provisioner [f7a1543cfd6038a1fde7d7887cd1e774d04f9f8a9a87055b6fde99aed988a3b7] <==
	I0723 14:40:51.739653       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0723 14:40:51.771801       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0723 14:40:51.772569       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0723 14:41:09.182212       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0723 14:41:09.184155       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8628487c-5762-45db-9b0f-42e8c627871e", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-054469_00863b44-1fba-4556-bb02-bd33dd993504 became leader
	I0723 14:41:09.186364       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-054469_00863b44-1fba-4556-bb02-bd33dd993504!
	I0723 14:41:09.287023       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-054469_00863b44-1fba-4556-bb02-bd33dd993504!
	I0723 14:41:22.955753       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0723 14:41:22.955893       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    05637baf-a6c3-456f-857a-6453192fb007 389 0 2024-07-23 14:39:37 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-23 14:39:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-d0dc39a8-a869-4063-b9a5-710ab68985c4 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  d0dc39a8-a869-4063-b9a5-710ab68985c4 692 0 2024-07-23 14:41:22 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-23 14:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-23 14:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0723 14:41:22.956488       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d0dc39a8-a869-4063-b9a5-710ab68985c4", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0723 14:41:22.958597       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-d0dc39a8-a869-4063-b9a5-710ab68985c4" provisioned
	I0723 14:41:22.958649       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0723 14:41:22.958679       1 volume_store.go:212] Trying to save persistentvolume "pvc-d0dc39a8-a869-4063-b9a5-710ab68985c4"
	I0723 14:41:22.968477       1 volume_store.go:219] persistentvolume "pvc-d0dc39a8-a869-4063-b9a5-710ab68985c4" saved
	I0723 14:41:22.968840       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d0dc39a8-a869-4063-b9a5-710ab68985c4", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-d0dc39a8-a869-4063-b9a5-710ab68985c4
	

                                                
                                                
-- /stdout --
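The second storage-provisioner instance above walks the complete dynamic-provisioning path for the claim: acquire the leader lease, observe "default/myclaim", create volume "pvc-d0dc39a8-a869-4063-b9a5-710ab68985c4" backed by /tmp/hostpath-provisioner/default/myclaim, save it, and emit ProvisioningSucceeded. A quick out-of-band way to confirm that binding against the same cluster context (a sketch; the jsonpath expressions are standard Kubernetes API fields, not part of the test suite):

	# Expect "Bound" once the provisioner has finished:
	kubectl --context functional-054469 get pvc myclaim -o jsonpath='{.status.phase}'
	# The hostPath directory backing the dynamically created PV:
	kubectl --context functional-054469 get pv pvc-d0dc39a8-a869-4063-b9a5-710ab68985c4 -o jsonpath='{.spec.hostPath.path}'

In other words, the PVC half of the test succeeded; the failure recorded below comes from pulling the pod's image, not from provisioning.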
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-054469 -n functional-054469
helpers_test.go:261: (dbg) Run:  kubectl --context functional-054469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-054469 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-054469 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-054469/192.168.49.2
	Start Time:       Tue, 23 Jul 2024 14:41:51 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://77095aa5c842e53b8e8c76b0e4882c48f549d784ce1c2264e7e541ae0186c715
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 23 Jul 2024 14:42:08 +0000
	      Finished:     Tue, 23 Jul 2024 14:42:08 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mhwwg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-mhwwg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m47s  default-scheduler  Successfully assigned default/busybox-mount to functional-054469
	  Normal  Pulling    2m47s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m30s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.66s (16.166s including waiting). Image size: 3774172 bytes.
	  Normal  Created    2m30s  kubelet            Created container mount-munger
	  Normal  Started    2m30s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-054469/192.168.49.2
	Start Time:       Tue, 23 Jul 2024 14:41:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6rmmg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6rmmg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m3s                  default-scheduler  Successfully assigned default/sp-pod to functional-054469
	  Warning  Failed     104s (x2 over 2m32s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     48s (x3 over 2m32s)   kubelet            Error: ErrImagePull
	  Warning  Failed     48s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    19s (x4 over 2m32s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     19s (x4 over 2m32s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    6s (x4 over 3m2s)     kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (201.20s)
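Every image-pull failure in this test traces to a single cause visible in the kubelet events: anonymous pulls of docker.io/nginx hit Docker Hub's pull rate limit ("toomanyrequests"). A minimal mitigation sketch for a cluster like this one, assuming valid Docker Hub credentials exist on the runner (the secret name "regcred" and the credential variables are illustrative placeholders, not part of the test suite):

	# Store registry credentials as an image-pull secret in the default namespace:
	kubectl --context functional-054469 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	# Attach it to the default service account so pods such as sp-pod pull as an
	# authenticated user without changing their manifests:
	kubectl --context functional-054469 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Mirroring the test image to a registry outside docker.io would sidestep the limit entirely.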

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (26.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-864402 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-864402 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.838172358s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-864402] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "pause-864402" primary control-plane node in "pause-864402" cluster
	* Pulling base image v0.0.44-1721687125-19319 ...
	* Updating the running docker "pause-864402" container ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-864402" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:13:47.001018 3472620 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:13:47.001213 3472620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:13:47.001239 3472620 out.go:304] Setting ErrFile to fd 2...
	I0723 15:13:47.001255 3472620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:13:47.001723 3472620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 15:13:47.002320 3472620 out.go:298] Setting JSON to false
	I0723 15:13:47.003537 3472620 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":86173,"bootTime":1721661454,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 15:13:47.003664 3472620 start.go:139] virtualization:  
	I0723 15:13:47.008466 3472620 out.go:177] * [pause-864402] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0723 15:13:47.010890 3472620 notify.go:220] Checking for updates...
	I0723 15:13:47.014829 3472620 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:13:47.018484 3472620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:13:47.022110 3472620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 15:13:47.025197 3472620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 15:13:47.027611 3472620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 15:13:47.030067 3472620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:13:47.033066 3472620 config.go:182] Loaded profile config "pause-864402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:13:47.033645 3472620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:13:47.069289 3472620 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 15:13:47.069395 3472620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 15:13:47.135704 3472620 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:63 SystemTime:2024-07-23 15:13:47.119179074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 15:13:47.135820 3472620 docker.go:307] overlay module found
	I0723 15:13:47.139052 3472620 out.go:177] * Using the docker driver based on existing profile
	I0723 15:13:47.141474 3472620 start.go:297] selected driver: docker
	I0723 15:13:47.141494 3472620 start.go:901] validating driver "docker" against &{Name:pause-864402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:13:47.141640 3472620 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:13:47.141740 3472620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 15:13:47.196002 3472620 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:63 SystemTime:2024-07-23 15:13:47.186899926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 15:13:47.196424 3472620 cni.go:84] Creating CNI manager for ""
	I0723 15:13:47.196443 3472620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 15:13:47.196500 3472620 start.go:340] cluster config:
	{Name:pause-864402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:13:47.199551 3472620 out.go:177] * Starting "pause-864402" primary control-plane node in "pause-864402" cluster
	I0723 15:13:47.202230 3472620 cache.go:121] Beginning downloading kic base image for docker with crio
	I0723 15:13:47.204911 3472620 out.go:177] * Pulling base image v0.0.44-1721687125-19319 ...
	I0723 15:13:47.207514 3472620 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:13:47.207546 3472620 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local docker daemon
	I0723 15:13:47.207608 3472620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0723 15:13:47.207617 3472620 cache.go:56] Caching tarball of preloaded images
	I0723 15:13:47.207690 3472620 preload.go:172] Found /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0723 15:13:47.207700 3472620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0723 15:13:47.207831 3472620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/config.json ...
	W0723 15:13:47.226177 3472620 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae is of wrong architecture
	I0723 15:13:47.226196 3472620 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
	I0723 15:13:47.226286 3472620 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory
	I0723 15:13:47.226304 3472620 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory, skipping pull
	I0723 15:13:47.226308 3472620 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae exists in cache, skipping pull
	I0723 15:13:47.226384 3472620 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae as a tarball
	I0723 15:13:47.226397 3472620 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from local cache
	I0723 15:13:47.341173 3472620 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from cached tarball
	I0723 15:13:47.341227 3472620 cache.go:194] Successfully downloaded all kic artifacts
	I0723 15:13:47.341267 3472620 start.go:360] acquireMachinesLock for pause-864402: {Name:mk16aaf1e6035d192404df0dce2a44c7b74398bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0723 15:13:47.341337 3472620 start.go:364] duration metric: took 43.102µs to acquireMachinesLock for "pause-864402"
	I0723 15:13:47.341362 3472620 start.go:96] Skipping create...Using existing machine configuration
	I0723 15:13:47.341371 3472620 fix.go:54] fixHost starting: 
	I0723 15:13:47.341648 3472620 cli_runner.go:164] Run: docker container inspect pause-864402 --format={{.State.Status}}
	I0723 15:13:47.358185 3472620 fix.go:112] recreateIfNeeded on pause-864402: state=Running err=<nil>
	W0723 15:13:47.358212 3472620 fix.go:138] unexpected machine state, will restart: <nil>
	I0723 15:13:47.361216 3472620 out.go:177] * Updating the running docker "pause-864402" container ...
	I0723 15:13:47.363632 3472620 machine.go:94] provisionDockerMachine start ...
	I0723 15:13:47.363748 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:47.380450 3472620 main.go:141] libmachine: Using SSH client type: native
	I0723 15:13:47.380711 3472620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37362 <nil> <nil>}
	I0723 15:13:47.380727 3472620 main.go:141] libmachine: About to run SSH command:
	hostname
	I0723 15:13:47.502134 3472620 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-864402
	
	I0723 15:13:47.502156 3472620 ubuntu.go:169] provisioning hostname "pause-864402"
	I0723 15:13:47.502219 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:47.521855 3472620 main.go:141] libmachine: Using SSH client type: native
	I0723 15:13:47.522116 3472620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37362 <nil> <nil>}
	I0723 15:13:47.522127 3472620 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-864402 && echo "pause-864402" | sudo tee /etc/hostname
	I0723 15:13:47.659334 3472620 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-864402
	
	I0723 15:13:47.659435 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:47.677144 3472620 main.go:141] libmachine: Using SSH client type: native
	I0723 15:13:47.677429 3472620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37362 <nil> <nil>}
	I0723 15:13:47.677451 3472620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-864402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-864402/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-864402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0723 15:13:47.802498 3472620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0723 15:13:47.802540 3472620 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19319-3317687/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-3317687/.minikube}
	I0723 15:13:47.802567 3472620 ubuntu.go:177] setting up certificates
	I0723 15:13:47.802577 3472620 provision.go:84] configureAuth start
	I0723 15:13:47.802645 3472620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-864402
	I0723 15:13:47.819138 3472620 provision.go:143] copyHostCerts
	I0723 15:13:47.819240 3472620 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.pem, removing ...
	I0723 15:13:47.819254 3472620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.pem
	I0723 15:13:47.819334 3472620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.pem (1082 bytes)
	I0723 15:13:47.819433 3472620 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3317687/.minikube/cert.pem, removing ...
	I0723 15:13:47.819448 3472620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3317687/.minikube/cert.pem
	I0723 15:13:47.819478 3472620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/cert.pem (1123 bytes)
	I0723 15:13:47.819547 3472620 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3317687/.minikube/key.pem, removing ...
	I0723 15:13:47.819557 3472620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3317687/.minikube/key.pem
	I0723 15:13:47.819583 3472620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-3317687/.minikube/key.pem (1679 bytes)
	I0723 15:13:47.819631 3472620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem org=jenkins.pause-864402 san=[127.0.0.1 192.168.76.2 localhost minikube pause-864402]
	I0723 15:13:48.182937 3472620 provision.go:177] copyRemoteCerts
	I0723 15:13:48.183005 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0723 15:13:48.183053 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:48.208171 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:48.300711 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0723 15:13:48.326572 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0723 15:13:48.350639 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0723 15:13:48.375194 3472620 provision.go:87] duration metric: took 572.592599ms to configureAuth
	I0723 15:13:48.375228 3472620 ubuntu.go:193] setting minikube options for container-runtime
	I0723 15:13:48.375443 3472620 config.go:182] Loaded profile config "pause-864402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:13:48.375559 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:48.392043 3472620 main.go:141] libmachine: Using SSH client type: native
	I0723 15:13:48.392291 3472620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 37362 <nil> <nil>}
	I0723 15:13:48.392311 3472620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0723 15:13:53.773779 3472620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:13:53.773804 3472620 machine.go:97] duration metric: took 6.410154461s to provisionDockerMachine
	I0723 15:13:53.773817 3472620 start.go:293] postStartSetup for "pause-864402" (driver="docker")
	I0723 15:13:53.773833 3472620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:13:53.773906 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:13:53.773973 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:53.791890 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:53.883547 3472620 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:13:53.887349 3472620 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0723 15:13:53.887384 3472620 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0723 15:13:53.887395 3472620 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0723 15:13:53.887402 3472620 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0723 15:13:53.887412 3472620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/addons for local assets ...
	I0723 15:13:53.887470 3472620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/files for local assets ...
	I0723 15:13:53.887553 3472620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem -> 33230802.pem in /etc/ssl/certs
	I0723 15:13:53.887659 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:13:53.896377 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem --> /etc/ssl/certs/33230802.pem (1708 bytes)
	I0723 15:13:53.921262 3472620 start.go:296] duration metric: took 147.429547ms for postStartSetup
	I0723 15:13:53.921344 3472620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 15:13:53.921398 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:53.937806 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.031674 3472620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0723 15:13:54.037551 3472620 fix.go:56] duration metric: took 6.696172806s for fixHost
	I0723 15:13:54.037590 3472620 start.go:83] releasing machines lock for "pause-864402", held for 6.696234353s
	I0723 15:13:54.037667 3472620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-864402
	I0723 15:13:54.078382 3472620 ssh_runner.go:195] Run: cat /version.json
	I0723 15:13:54.078440 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:54.078800 3472620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:13:54.078868 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:54.104406 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.113336 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.206580 3472620 ssh_runner.go:195] Run: systemctl --version
	I0723 15:13:54.350847 3472620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:13:54.508782 3472620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0723 15:13:54.513127 3472620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:13:54.522226 3472620 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0723 15:13:54.522312 3472620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:13:54.531250 3472620 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 15:13:54.531271 3472620 start.go:495] detecting cgroup driver to use...
	I0723 15:13:54.531304 3472620 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0723 15:13:54.531351 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:13:54.543727 3472620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:13:54.555796 3472620 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:13:54.555860 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:13:54.569340 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:13:54.581294 3472620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:13:54.753133 3472620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:13:54.907097 3472620 docker.go:233] disabling docker service ...
	I0723 15:13:54.907179 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:13:54.922924 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:13:54.935806 3472620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:13:55.096113 3472620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:13:55.252145 3472620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:13:55.268649 3472620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:13:55.291908 3472620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:13:55.291969 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.303238 3472620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:13:55.303312 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.315210 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.326089 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.339014 3472620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:13:55.348606 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.358155 3472620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.371379 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.385859 3472620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:13:55.400832 3472620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:13:55.410072 3472620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:13:55.539371 3472620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:13:56.223641 3472620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:13:56.223785 3472620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:13:56.233648 3472620 start.go:563] Will wait 60s for crictl version
	I0723 15:13:56.233806 3472620 ssh_runner.go:195] Run: which crictl
	I0723 15:13:56.243168 3472620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:13:56.338106 3472620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0723 15:13:56.338218 3472620 ssh_runner.go:195] Run: crio --version
	I0723 15:13:56.400339 3472620 ssh_runner.go:195] Run: crio --version
	I0723 15:13:56.453239 3472620 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0723 15:13:56.455003 3472620 cli_runner.go:164] Run: docker network inspect pause-864402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0723 15:13:56.470092 3472620 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0723 15:13:56.473824 3472620 kubeadm.go:883] updating cluster {Name:pause-864402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:13:56.473979 3472620 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:13:56.474051 3472620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:13:56.527299 3472620 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:13:56.527324 3472620 crio.go:433] Images already preloaded, skipping extraction
	I0723 15:13:56.527379 3472620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:13:56.587248 3472620 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:13:56.587276 3472620 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:13:56.587285 3472620 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.30.3 crio true true} ...
	I0723 15:13:56.587404 3472620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-864402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:13:56.587491 3472620 ssh_runner.go:195] Run: crio config
	I0723 15:13:56.681898 3472620 cni.go:84] Creating CNI manager for ""
	I0723 15:13:56.681958 3472620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 15:13:56.681982 3472620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:13:56.682017 3472620 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-864402 NodeName:pause-864402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:13:56.682179 3472620 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-864402"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
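The block above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by --- lines); a few lines below the log shows it being written to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch, independent of the minikube codebase and using an illustrative input path, that splits such a file and reports each document's kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Illustrative path; on the node the file is /var/tmp/minikube/kubeadm.yaml.new.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm separates the documents with a bare "---" line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s (%d bytes)\n", i, kind, len(doc))
	}
}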
	
	I0723 15:13:56.682265 3472620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:13:56.691778 3472620 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:13:56.691891 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:13:56.700710 3472620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0723 15:13:56.719151 3472620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:13:56.737121 3472620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0723 15:13:56.754845 3472620 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0723 15:13:56.758824 3472620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:13:56.949133 3472620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:13:56.977082 3472620 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402 for IP: 192.168.76.2
	I0723 15:13:56.977101 3472620 certs.go:194] generating shared ca certs ...
	I0723 15:13:56.977117 3472620 certs.go:226] acquiring lock for ca certs: {Name:mk9061483da1430ff0fd8e32bc77025286e53111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:13:56.977248 3472620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key
	I0723 15:13:56.977289 3472620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key
	I0723 15:13:56.977297 3472620 certs.go:256] generating profile certs ...
	I0723 15:13:56.977394 3472620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/client.key
	I0723 15:13:56.977462 3472620 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.key.19941d83
	I0723 15:13:56.977504 3472620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.key
	I0723 15:13:56.977611 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080.pem (1338 bytes)
	W0723 15:13:56.977637 3472620 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080_empty.pem, impossibly tiny 0 bytes
	I0723 15:13:56.977645 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:13:56.977669 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem (1082 bytes)
	I0723 15:13:56.977697 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:13:56.977718 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem (1679 bytes)
	I0723 15:13:56.977759 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem (1708 bytes)
	I0723 15:13:56.978384 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:13:57.028558 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0723 15:13:57.070000 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:13:57.103231 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0723 15:13:57.132732 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0723 15:13:57.161229 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:13:57.220308 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:13:57.319777 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:13:57.569484 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:13:57.632265 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080.pem --> /usr/share/ca-certificates/3323080.pem (1338 bytes)
	I0723 15:13:57.720923 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem --> /usr/share/ca-certificates/33230802.pem (1708 bytes)
	I0723 15:13:57.937972 3472620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:13:57.989494 3472620 ssh_runner.go:195] Run: openssl version
	I0723 15:13:58.073449 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:13:58.159897 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.190699 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 14:27 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.190766 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.243316 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:13:58.259885 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3323080.pem && ln -fs /usr/share/ca-certificates/3323080.pem /etc/ssl/certs/3323080.pem"
	I0723 15:13:58.270598 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.274548 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:38 /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.274607 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.281784 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3323080.pem /etc/ssl/certs/51391683.0"
	I0723 15:13:58.294053 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33230802.pem && ln -fs /usr/share/ca-certificates/33230802.pem /etc/ssl/certs/33230802.pem"
	I0723 15:13:58.304185 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.307798 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:38 /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.307865 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.314838 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33230802.pem /etc/ssl/certs/3ec20f2e.0"
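The three test/ln pairs above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed CA directory convention: each certificate under /usr/share/ca-certificates is made reachable through a symlink in /etc/ssl/certs named <subject-hash>.0. A rough Go equivalent of one such pair, shelling out to the openssl binary just as the log does (the PEM path is illustrative, and the symlink step needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	// Same command as in the log: print the subject hash of the cert.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Mirrors `test -L link || ln -fs pem link`: only create the link if absent.
	if _, err := os.Lstat(link); err == nil {
		fmt.Println(link, "already exists")
		return
	}
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pem)
}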
	I0723 15:13:58.326602 3472620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:13:58.333897 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:13:58.343025 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:13:58.355954 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:13:58.364341 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:13:58.377147 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:13:58.399156 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
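The run of `openssl x509 ... -checkend 86400` commands above verifies that each control-plane certificate stays valid for at least another 86400 seconds (24 hours). The same check can be expressed in pure Go with the standard library; the path below is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path; the log checks several certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `openssl x509 -checkend 86400`.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}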
	I0723 15:13:58.417388 3472620 kubeadm.go:392] StartCluster: {Name:pause-864402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:13:58.417511 3472620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:13:58.417589 3472620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:13:58.481361 3472620 cri.go:89] found id: "5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b"
	I0723 15:13:58.481380 3472620 cri.go:89] found id: "394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d"
	I0723 15:13:58.481386 3472620 cri.go:89] found id: "99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171"
	I0723 15:13:58.481390 3472620 cri.go:89] found id: "8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99"
	I0723 15:13:58.481393 3472620 cri.go:89] found id: "13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda"
	I0723 15:13:58.481396 3472620 cri.go:89] found id: "682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6"
	I0723 15:13:58.481399 3472620 cri.go:89] found id: "e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0"
	I0723 15:13:58.481403 3472620 cri.go:89] found id: "f69c46d26d552fcd64207b6dd16a65aa163aec359db25d9f540d40ebcf724a21"
	I0723 15:13:58.481406 3472620 cri.go:89] found id: "2ab671d3c2c489377ad82bf8c5a3897ffe297d7fa42d663d3b6a81f5edae2f0c"
	I0723 15:13:58.481415 3472620 cri.go:89] found id: "42e151d36a512ec37fa96e308c539b09fec09053a664e3849680f927f6452a2c"
	I0723 15:13:58.481418 3472620 cri.go:89] found id: "56a202d711bd0cb7058cc944a7bc7f5e3da05738e3963343fd9ed69ce506f333"
	I0723 15:13:58.481421 3472620 cri.go:89] found id: "0de9dcbc9d41bf6080842f82238b88f2e50689901abb8b333766b2dfa0e63a55"
	I0723 15:13:58.481424 3472620 cri.go:89] found id: "b203059c3c1438dfd31a735bfcf31379d96d975eec700982ccca07d52f6dc740"
	I0723 15:13:58.481428 3472620 cri.go:89] found id: "ea06788641402edebc224a5a98dec557723cdf365b7fcd0c05b20e74207c02ed"
	I0723 15:13:58.481433 3472620 cri.go:89] found id: "de67f38f84571a6ff8218d47a832510c1ddcb92ac9d51bfff7edb8e4d82ca409"
	I0723 15:13:58.481436 3472620 cri.go:89] found id: "64765c244fad2609e102831ecd2f0bc2856d178e82298ec0f01ed7d1ea631562"
	I0723 15:13:58.481439 3472620 cri.go:89] found id: ""
	I0723 15:13:58.481497 3472620 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
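The stderr tail above ends with minikube enumerating the kube-system containers it is about to pause (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, followed by `runc list`). A small Go sketch that issues the same crictl query via os/exec, assuming crictl and sudo are available and /etc/crictl.yaml points at the CRI-O socket as configured earlier in these logs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same query as in the log above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out)) // --quiet prints one container ID per line
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Printf("%d kube-system containers\n", len(ids))
}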
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-864402
helpers_test.go:235: (dbg) docker inspect pause-864402:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b",
	        "Created": "2024-07-23T15:12:42.62184285Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3464571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-23T15:12:42.81302532Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:71a7ac3dcc1f66f9b927c200bbaca5de093c77584a8e2cceb20f7c37b7028780",
	        "ResolvConfPath": "/var/lib/docker/containers/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b/hosts",
	        "LogPath": "/var/lib/docker/containers/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b-json.log",
	        "Name": "/pause-864402",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-864402:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-864402",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0268c37d37deb8ea02c87254d4984947bd4a6d2871fecfbfebcb475f25a7fb2-init/diff:/var/lib/docker/overlay2/cc3f8b49bb50b989dafe94ead705091dcc80edbdd409e161d5028bc93b57b742/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0268c37d37deb8ea02c87254d4984947bd4a6d2871fecfbfebcb475f25a7fb2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0268c37d37deb8ea02c87254d4984947bd4a6d2871fecfbfebcb475f25a7fb2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0268c37d37deb8ea02c87254d4984947bd4a6d2871fecfbfebcb475f25a7fb2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-864402",
	                "Source": "/var/lib/docker/volumes/pause-864402/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-864402",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-864402",
	                "name.minikube.sigs.k8s.io": "pause-864402",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b3305b54290d3b476e2f31e6071b9b245d67f794d28755e8290893bdd3f3af1",
	            "SandboxKey": "/var/run/docker/netns/9b3305b54290",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37362"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37363"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37366"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37364"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37365"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-864402": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ed2707062fcb8c2560a93f07dbc68eece32d47deb7abc78e42ea1d714fb25a36",
	                    "EndpointID": "0af19ba7e6b67c3b763faa7368f687d1e147a286be83c635fd6423c6fa68e35b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-864402",
	                        "8f70c9e72dce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
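Note how HostConfig.PortBindings in the inspect output requests ephemeral host ports (empty HostPort) while NetworkSettings.Ports records what Docker actually assigned, e.g. 22/tcp on 127.0.0.1:37362, the SSH port minikube dials later in these logs. A self-contained Go sketch that decodes that structure; the embedded JSON is trimmed from the output above:

package main

import (
	"encoding/json"
	"fmt"
)

// Binding mirrors one entry of NetworkSettings.Ports in `docker inspect` output.
type Binding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

func main() {
	// Trimmed from the inspect output above.
	raw := []byte(`{
		"22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "37362"}],
		"8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "37365"}]
	}`)
	var ports map[string][]Binding
	if err := json.Unmarshal(raw, &ports); err != nil {
		panic(err)
	}
	for proto, bindings := range ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", proto, b.HostIP, b.HostPort)
		}
	}
}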
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-864402 -n pause-864402
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-864402 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-864402 logs -n 25: (2.569499479s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	|         | --cancel-scheduled             |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	| start   | -p insufficient-storage-028797 | insufficient-storage-028797 | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-028797 | insufficient-storage-028797 | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	| start   | -p pause-864402 --memory=2048  | pause-864402                | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:13 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:13 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-231608 sudo    | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-864402                | pause-864402                | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-231608 sudo    | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p missing-upgrade-018960      | minikube                    | jenkins | v1.26.0 | 23 Jul 24 15:13 UTC |                     |
	|         | --memory=2200 --driver=docker  |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:13:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.18.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:13:55.151570 3473787 out.go:296] Setting OutFile to fd 1 ...
	I0723 15:13:55.151679 3473787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0723 15:13:55.151683 3473787 out.go:309] Setting ErrFile to fd 2...
	I0723 15:13:55.151687 3473787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0723 15:13:55.151922 3473787 root.go:329] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 15:13:55.152206 3473787 out.go:303] Setting JSON to false
	I0723 15:13:55.153109 3473787 start.go:115] hostinfo: {"hostname":"ip-172-31-21-244","uptime":86182,"bootTime":1721661454,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 15:13:55.153175 3473787 start.go:125] virtualization:  
	I0723 15:13:55.156792 3473787 out.go:177] * [missing-upgrade-018960] minikube v1.26.0 on Ubuntu 20.04 (arm64)
	I0723 15:13:55.159558 3473787 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:13:55.159609 3473787 notify.go:193] Checking for updates...
	I0723 15:13:55.164791 3473787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:13:55.167372 3473787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 15:13:55.169958 3473787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 15:13:55.172686 3473787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 15:13:55.175244 3473787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:13:55.178281 3473787 config.go:178] Loaded profile config "pause-864402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:13:55.178327 3473787 driver.go:360] Setting default libvirt URI to qemu:///system
	I0723 15:13:55.218013 3473787 docker.go:137] docker version: linux-27.1.0
	I0723 15:13:55.218100 3473787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 15:13:55.234969 3473787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/last_update_check: {Name:mk486b32d34537fa2821a17e03a096a80d26a8e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:13:55.238058 3473787 out.go:177] * minikube 1.33.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.33.1
	I0723 15:13:55.240642 3473787 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I0723 15:13:55.299213 3473787 info.go:265] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-23 15:13:55.287882076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 15:13:55.299304 3473787 docker.go:254] overlay module found
	I0723 15:13:55.302392 3473787 out.go:177] * Using the docker driver based on user configuration
	I0723 15:13:55.305714 3473787 start.go:284] selected driver: docker
	I0723 15:13:55.305734 3473787 start.go:805] validating driver "docker" against <nil>
	I0723 15:13:55.305754 3473787 start.go:816] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:13:55.306345 3473787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 15:13:55.398990 3473787 info.go:265] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-23 15:13:55.388006739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 15:13:55.399104 3473787 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0723 15:13:55.399273 3473787 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 15:13:55.402368 3473787 out.go:177] * Using Docker driver with root privileges
	I0723 15:13:55.404986 3473787 cni.go:95] Creating CNI manager for ""
	I0723 15:13:55.404999 3473787 cni.go:162] "docker" driver + crio runtime found, recommending kindnet
	I0723 15:13:55.405008 3473787 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 15:13:55.405017 3473787 start_flags.go:310] config:
	{Name:missing-upgrade-018960 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:missing-upgrade-018960 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0723 15:13:55.407936 3473787 out.go:177] * Starting control plane node missing-upgrade-018960 in cluster missing-upgrade-018960
	I0723 15:13:55.411587 3473787 cache.go:120] Beginning downloading kic base image for docker with crio
	I0723 15:13:55.414192 3473787 out.go:177] * Pulling base image ...
	I0723 15:13:55.416661 3473787 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0723 15:13:55.416821 3473787 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local docker daemon
	I0723 15:13:55.433054 3473787 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local cache
	I0723 15:13:55.433267 3473787 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local cache directory
	I0723 15:13:55.433828 3473787 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local cache
	I0723 15:13:55.475352 3473787 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-arm64.tar.lz4
	I0723 15:13:55.475365 3473787 cache.go:57] Caching tarball of preloaded images
	I0723 15:13:55.475528 3473787 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0723 15:13:55.478298 3473787 out.go:177] * Downloading Kubernetes v1.24.1 preload ...
	I0723 15:13:53.773779 3472620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:13:53.773804 3472620 machine.go:97] duration metric: took 6.410154461s to provisionDockerMachine
	I0723 15:13:53.773817 3472620 start.go:293] postStartSetup for "pause-864402" (driver="docker")
	I0723 15:13:53.773833 3472620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:13:53.773906 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:13:53.773973 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:53.791890 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:53.883547 3472620 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:13:53.887349 3472620 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0723 15:13:53.887384 3472620 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0723 15:13:53.887395 3472620 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0723 15:13:53.887402 3472620 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0723 15:13:53.887412 3472620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/addons for local assets ...
	I0723 15:13:53.887470 3472620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/files for local assets ...
	I0723 15:13:53.887553 3472620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem -> 33230802.pem in /etc/ssl/certs
	I0723 15:13:53.887659 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:13:53.896377 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem --> /etc/ssl/certs/33230802.pem (1708 bytes)
	I0723 15:13:53.921262 3472620 start.go:296] duration metric: took 147.429547ms for postStartSetup
	I0723 15:13:53.921344 3472620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 15:13:53.921398 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:53.937806 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.031674 3472620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0723 15:13:54.037551 3472620 fix.go:56] duration metric: took 6.696172806s for fixHost
	I0723 15:13:54.037590 3472620 start.go:83] releasing machines lock for "pause-864402", held for 6.696234353s
	I0723 15:13:54.037667 3472620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-864402
	I0723 15:13:54.078382 3472620 ssh_runner.go:195] Run: cat /version.json
	I0723 15:13:54.078440 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:54.078800 3472620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:13:54.078868 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:54.104406 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.113336 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.206580 3472620 ssh_runner.go:195] Run: systemctl --version
	I0723 15:13:54.350847 3472620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:13:54.508782 3472620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0723 15:13:54.513127 3472620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:13:54.522226 3472620 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0723 15:13:54.522312 3472620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:13:54.531250 3472620 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 15:13:54.531271 3472620 start.go:495] detecting cgroup driver to use...
	I0723 15:13:54.531304 3472620 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0723 15:13:54.531351 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:13:54.543727 3472620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:13:54.555796 3472620 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:13:54.555860 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:13:54.569340 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:13:54.581294 3472620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:13:54.753133 3472620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:13:54.907097 3472620 docker.go:233] disabling docker service ...
	I0723 15:13:54.907179 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:13:54.922924 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:13:54.935806 3472620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:13:55.096113 3472620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:13:55.252145 3472620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:13:55.268649 3472620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:13:55.291908 3472620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:13:55.291969 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.303238 3472620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:13:55.303312 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.315210 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.326089 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.339014 3472620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:13:55.348606 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.358155 3472620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.371379 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
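	Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf in roughly the following shape. The section headers are an assumption about the stock kicbase config (pause_image normally sits under [crio.image], the runtime keys under [crio.runtime]); the keys and values come directly from the commands logged above:
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]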
	I0723 15:13:55.385859 3472620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:13:55.400832 3472620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:13:55.410072 3472620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:13:55.539371 3472620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:13:56.223641 3472620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:13:56.223785 3472620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:13:56.233648 3472620 start.go:563] Will wait 60s for crictl version
	I0723 15:13:56.233806 3472620 ssh_runner.go:195] Run: which crictl
	I0723 15:13:56.243168 3472620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:13:56.338106 3472620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0723 15:13:56.338218 3472620 ssh_runner.go:195] Run: crio --version
	I0723 15:13:56.400339 3472620 ssh_runner.go:195] Run: crio --version
	I0723 15:13:56.453239 3472620 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0723 15:13:56.455003 3472620 cli_runner.go:164] Run: docker network inspect pause-864402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0723 15:13:56.470092 3472620 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0723 15:13:56.473824 3472620 kubeadm.go:883] updating cluster {Name:pause-864402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:13:56.473979 3472620 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:13:56.474051 3472620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:13:56.527299 3472620 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:13:56.527324 3472620 crio.go:433] Images already preloaded, skipping extraction
	I0723 15:13:56.527379 3472620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:13:56.587248 3472620 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:13:56.587276 3472620 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:13:56.587285 3472620 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.30.3 crio true true} ...
	I0723 15:13:56.587404 3472620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-864402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:13:56.587491 3472620 ssh_runner.go:195] Run: crio config
	I0723 15:13:56.681898 3472620 cni.go:84] Creating CNI manager for ""
	I0723 15:13:56.681958 3472620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 15:13:56.681982 3472620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:13:56.682017 3472620 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-864402 NodeName:pause-864402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:13:56.682179 3472620 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-864402"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
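	
	The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new in the scp step below. As a hedged aside, a file in this shape can be sanity-checked offline with kubeadm's dry-run mode, which parses all four documents without bootstrapping anything:
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run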
	
	I0723 15:13:56.682265 3472620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:13:56.691778 3472620 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:13:56.691891 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:13:56.700710 3472620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0723 15:13:56.719151 3472620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:13:56.737121 3472620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0723 15:13:56.754845 3472620 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0723 15:13:56.758824 3472620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:13:56.949133 3472620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:13:56.977082 3472620 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402 for IP: 192.168.76.2
	I0723 15:13:56.977101 3472620 certs.go:194] generating shared ca certs ...
	I0723 15:13:56.977117 3472620 certs.go:226] acquiring lock for ca certs: {Name:mk9061483da1430ff0fd8e32bc77025286e53111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:13:56.977248 3472620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key
	I0723 15:13:56.977289 3472620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key
	I0723 15:13:56.977297 3472620 certs.go:256] generating profile certs ...
	I0723 15:13:56.977394 3472620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/client.key
	I0723 15:13:56.977462 3472620 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.key.19941d83
	I0723 15:13:56.977504 3472620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.key
	I0723 15:13:56.977611 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080.pem (1338 bytes)
	W0723 15:13:56.977637 3472620 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080_empty.pem, impossibly tiny 0 bytes
	I0723 15:13:56.977645 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:13:56.977669 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem (1082 bytes)
	I0723 15:13:56.977697 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:13:56.977718 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem (1679 bytes)
	I0723 15:13:56.977759 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem (1708 bytes)
	I0723 15:13:56.978384 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:13:57.028558 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0723 15:13:57.070000 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:13:57.103231 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0723 15:13:57.132732 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0723 15:13:57.161229 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:13:57.220308 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:13:57.319777 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:13:57.569484 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:13:57.632265 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080.pem --> /usr/share/ca-certificates/3323080.pem (1338 bytes)
	I0723 15:13:57.720923 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem --> /usr/share/ca-certificates/33230802.pem (1708 bytes)
	I0723 15:13:57.937972 3472620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:13:57.989494 3472620 ssh_runner.go:195] Run: openssl version
	I0723 15:13:58.073449 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:13:58.159897 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.190699 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 14:27 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.190766 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.243316 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:13:58.259885 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3323080.pem && ln -fs /usr/share/ca-certificates/3323080.pem /etc/ssl/certs/3323080.pem"
	I0723 15:13:58.270598 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.274548 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:38 /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.274607 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.281784 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3323080.pem /etc/ssl/certs/51391683.0"
	I0723 15:13:58.294053 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33230802.pem && ln -fs /usr/share/ca-certificates/33230802.pem /etc/ssl/certs/33230802.pem"
	I0723 15:13:58.304185 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.307798 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:38 /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.307865 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.314838 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33230802.pem /etc/ssl/certs/3ec20f2e.0"
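	The /etc/ssl/certs link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: each CA is linked as <subject-hash>.0, with the hash taken from the openssl x509 -hash calls logged just before each ln. Reproducing one by hand:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # prints b5213941, hence the trust-store link /etc/ssl/certs/b5213941.0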
	I0723 15:13:58.326602 3472620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:13:58.333897 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:13:58.343025 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:13:58.355954 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:13:58.364341 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:13:58.377147 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:13:58.399156 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
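	The -checkend 86400 probes above make OpenSSL report whether each control-plane certificate expires within 86400 seconds (24 hours); a non-zero exit is what would presumably prompt minikube to regenerate a cert rather than reuse it. For example:
	
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least another day"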
	I0723 15:13:58.417388 3472620 kubeadm.go:392] StartCluster: {Name:pause-864402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:13:58.417511 3472620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:13:58.417589 3472620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:13:58.481361 3472620 cri.go:89] found id: "5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b"
	I0723 15:13:58.481380 3472620 cri.go:89] found id: "394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d"
	I0723 15:13:58.481386 3472620 cri.go:89] found id: "99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171"
	I0723 15:13:58.481390 3472620 cri.go:89] found id: "8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99"
	I0723 15:13:58.481393 3472620 cri.go:89] found id: "13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda"
	I0723 15:13:58.481396 3472620 cri.go:89] found id: "682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6"
	I0723 15:13:58.481399 3472620 cri.go:89] found id: "e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0"
	I0723 15:13:58.481403 3472620 cri.go:89] found id: "f69c46d26d552fcd64207b6dd16a65aa163aec359db25d9f540d40ebcf724a21"
	I0723 15:13:58.481406 3472620 cri.go:89] found id: "2ab671d3c2c489377ad82bf8c5a3897ffe297d7fa42d663d3b6a81f5edae2f0c"
	I0723 15:13:58.481415 3472620 cri.go:89] found id: "42e151d36a512ec37fa96e308c539b09fec09053a664e3849680f927f6452a2c"
	I0723 15:13:58.481418 3472620 cri.go:89] found id: "56a202d711bd0cb7058cc944a7bc7f5e3da05738e3963343fd9ed69ce506f333"
	I0723 15:13:58.481421 3472620 cri.go:89] found id: "0de9dcbc9d41bf6080842f82238b88f2e50689901abb8b333766b2dfa0e63a55"
	I0723 15:13:58.481424 3472620 cri.go:89] found id: "b203059c3c1438dfd31a735bfcf31379d96d975eec700982ccca07d52f6dc740"
	I0723 15:13:58.481428 3472620 cri.go:89] found id: "ea06788641402edebc224a5a98dec557723cdf365b7fcd0c05b20e74207c02ed"
	I0723 15:13:58.481433 3472620 cri.go:89] found id: "de67f38f84571a6ff8218d47a832510c1ddcb92ac9d51bfff7edb8e4d82ca409"
	I0723 15:13:58.481436 3472620 cri.go:89] found id: "64765c244fad2609e102831ecd2f0bc2856d178e82298ec0f01ed7d1ea631562"
	I0723 15:13:58.481439 3472620 cri.go:89] found id: ""
	I0723 15:13:58.481497 3472620 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.269698864Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.465613900Z" level=info msg="Created container f69c46d26d552fcd64207b6dd16a65aa163aec359db25d9f540d40ebcf724a21: kube-system/kube-controller-manager-pause-864402/kube-controller-manager" id=647f8e93-d8e4-4696-b7c8-d3199d6123d5 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.471937538Z" level=info msg="Starting container: f69c46d26d552fcd64207b6dd16a65aa163aec359db25d9f540d40ebcf724a21" id=90fa18c0-7fff-42d7-b95f-5586012215fa name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.502339331Z" level=info msg="Started container" PID=2783 containerID=f69c46d26d552fcd64207b6dd16a65aa163aec359db25d9f540d40ebcf724a21 description=kube-system/kube-controller-manager-pause-864402/kube-controller-manager id=90fa18c0-7fff-42d7-b95f-5586012215fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=82d4596098e97250469178af10ec01e1276d9b5d0692fe2b44099053665f0c3a
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.708388514Z" level=info msg="Created container 394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d: kube-system/coredns-7db6d8ff4d-8rdhr/coredns" id=ae2a572f-c3e5-4913-a905-8d6e1dbafc3d name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.709993848Z" level=info msg="Starting container: 394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d" id=32d74735-81aa-45a9-aed9-2171fa3275f0 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.764863039Z" level=info msg="Created container e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0: kube-system/kube-proxy-vjl8m/kube-proxy" id=616292ea-b5a4-47b6-a933-e950d0c701a6 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.766454367Z" level=info msg="Starting container: e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0" id=932c7296-6494-4b9f-980a-a78ef6a59f02 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.767004655Z" level=info msg="Created container 5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b: kube-system/kube-scheduler-pause-864402/kube-scheduler" id=9750343b-9960-4f3a-b352-2b50bbca1f97 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.767413692Z" level=info msg="Starting container: 5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b" id=d076ce5c-87ec-4d65-883e-69ac2d6c6c48 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.783054619Z" level=info msg="Created container 682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6: kube-system/coredns-7db6d8ff4d-cw9s5/coredns" id=9f66e07a-4cf5-4660-89fc-ba3f7c2351b1 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.784878145Z" level=info msg="Starting container: 682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6" id=601ea9f1-093f-431c-807f-c1986dd8a41c name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.795145741Z" level=info msg="Started container" PID=2816 containerID=e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0 description=kube-system/kube-proxy-vjl8m/kube-proxy id=932c7296-6494-4b9f-980a-a78ef6a59f02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5953e88cccb50d35924fc6530fe7e19f24d6ba8159af17501564df01065dce2a
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.821625082Z" level=info msg="Started container" PID=2929 containerID=394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d description=kube-system/coredns-7db6d8ff4d-8rdhr/coredns id=32d74735-81aa-45a9-aed9-2171fa3275f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b823c1ea53fb3194db4599042c018f8ce3acfe7835dbf7685b888fe88bfb67cd
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.824323411Z" level=info msg="Started container" PID=2912 containerID=5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b description=kube-system/kube-scheduler-pause-864402/kube-scheduler id=d076ce5c-87ec-4d65-883e-69ac2d6c6c48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0f78f96174f739f36ff60205abc1e8fa8c0cef7060d3c9fe738e48afc6050db
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.826483637Z" level=info msg="Created container 13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda: kube-system/kindnet-tzchc/kindnet-cni" id=f0be5b59-7c29-414b-b340-2ca8f7f0e43c name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.827742475Z" level=info msg="Starting container: 13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda" id=499602d9-0d4f-4507-ab37-6354af351918 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.860958495Z" level=info msg="Created container 99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171: kube-system/kube-apiserver-pause-864402/kube-apiserver" id=99470ce5-4ff1-4b25-8f5c-b9b1d3228356 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.861597440Z" level=info msg="Starting container: 99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171" id=dbcc5edb-e4c4-46f4-bbe0-9cc536f62fae name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.881209705Z" level=info msg="Started container" PID=2835 containerID=682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6 description=kube-system/coredns-7db6d8ff4d-cw9s5/coredns id=601ea9f1-093f-431c-807f-c1986dd8a41c name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c597616dae659012d895bcfd524f7efb69bd753e335d0448311cf212f6eb3dd
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.888384409Z" level=info msg="Started container" PID=2889 containerID=99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171 description=kube-system/kube-apiserver-pause-864402/kube-apiserver id=dbcc5edb-e4c4-46f4-bbe0-9cc536f62fae name=/runtime.v1.RuntimeService/StartContainer sandboxID=81dad9efc6edc7e0ea5e57ae152c4661614b90f808894dcb2f81fada4f35228a
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.897552988Z" level=info msg="Started container" PID=2880 containerID=13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda description=kube-system/kindnet-tzchc/kindnet-cni id=499602d9-0d4f-4507-ab37-6354af351918 name=/runtime.v1.RuntimeService/StartContainer sandboxID=33a658016d0639adc1c491fc1eddfb7d3775590ecad4db5f60636ebde63724d5
	Jul 23 15:13:58 pause-864402 crio[2610]: time="2024-07-23 15:13:58.078675685Z" level=info msg="Created container 8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99: kube-system/etcd-pause-864402/etcd" id=fad10261-1cb5-4a02-ba44-49b1477f4a50 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:58 pause-864402 crio[2610]: time="2024-07-23 15:13:58.079317461Z" level=info msg="Starting container: 8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99" id=4425f09a-69ee-42f5-a5aa-6f1d52882b32 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:58 pause-864402 crio[2610]: time="2024-07-23 15:13:58.097674605Z" level=info msg="Started container" PID=2928 containerID=8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99 description=kube-system/etcd-pause-864402/etcd id=4425f09a-69ee-42f5-a5aa-6f1d52882b32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa8d32346823da69d2749bba359150c88507f142a04127ea28d61e5db92d6d94
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5bb451f2e0e16       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                     10 seconds ago       Running             kube-scheduler            1                   f0f78f96174f7       kube-scheduler-pause-864402
	394ad9f805374       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                     10 seconds ago       Running             coredns                   1                   b823c1ea53fb3       coredns-7db6d8ff4d-8rdhr
	99c9ceed64ad3       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                     10 seconds ago       Running             kube-apiserver            1                   81dad9efc6edc       kube-apiserver-pause-864402
	8d96fb8d9b0ca       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                     10 seconds ago       Running             etcd                      1                   aa8d32346823d       etcd-pause-864402
	13ab1eaf0c3e0       f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800                                     11 seconds ago       Running             kindnet-cni               1                   33a658016d063       kindnet-tzchc
	682d8a34beb32       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                     11 seconds ago       Running             coredns                   1                   8c597616dae65       coredns-7db6d8ff4d-cw9s5
	e092a00049200       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                     11 seconds ago       Running             kube-proxy                1                   5953e88cccb50       kube-proxy-vjl8m
	f69c46d26d552       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                     11 seconds ago       Running             kube-controller-manager   1                   82d4596098e97       kube-controller-manager-pause-864402
	2ab671d3c2c48       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                     24 seconds ago       Exited              coredns                   0                   b823c1ea53fb3       coredns-7db6d8ff4d-8rdhr
	42e151d36a512       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                     24 seconds ago       Exited              coredns                   0                   8c597616dae65       coredns-7db6d8ff4d-cw9s5
	56a202d711bd0       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a   36 seconds ago       Exited              kindnet-cni               0                   33a658016d063       kindnet-tzchc
	0de9dcbc9d41b       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                     38 seconds ago       Exited              kube-proxy                0                   5953e88cccb50       kube-proxy-vjl8m
	b203059c3c143       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                     About a minute ago   Exited              etcd                      0                   aa8d32346823d       etcd-pause-864402
	ea06788641402       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                     About a minute ago   Exited              kube-controller-manager   0                   82d4596098e97       kube-controller-manager-pause-864402
	de67f38f84571       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                     About a minute ago   Exited              kube-apiserver            0                   81dad9efc6edc       kube-apiserver-pause-864402
	64765c244fad2       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                     About a minute ago   Exited              kube-scheduler            0                   f0f78f96174f7       kube-scheduler-pause-864402
	
	
	==> coredns [2ab671d3c2c489377ad82bf8c5a3897ffe297d7fa42d663d3b6a81f5edae2f0c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52568 - 11110 "HINFO IN 7062363414147552866.5651338380928459855. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017081376s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40709 - 7715 "HINFO IN 7580601572474393587.5889626321627148573. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014420392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [42e151d36a512ec37fa96e308c539b09fec09053a664e3849680f927f6452a2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37900 - 22275 "HINFO IN 5039388286640406195.5983277421212417471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015056273s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35588 - 901 "HINFO IN 7215170534647923444.6385224591540909521. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042996248s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-864402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-864402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=pause-864402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T15_13_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:13:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-864402
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:14:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:13:47 +0000   Tue, 23 Jul 2024 15:13:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:13:47 +0000   Tue, 23 Jul 2024 15:13:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:13:47 +0000   Tue, 23 Jul 2024 15:13:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:13:47 +0000   Tue, 23 Jul 2024 15:13:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-864402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 995abe7ee3c6458997f438010e42ea21
	  System UUID:                a9197847-48c0-415a-ba8a-c9f0f4e811c3
	  Boot ID:                    95e04985-bf92-47a1-9b5b-7f09371b9e30
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8rdhr                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     39s
	  kube-system                 coredns-7db6d8ff4d-cw9s5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     39s
	  kube-system                 etcd-pause-864402                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         52s
	  kube-system                 kindnet-tzchc                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-apiserver-pause-864402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-pause-864402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-vjl8m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-pause-864402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node pause-864402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node pause-864402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node pause-864402 status is now: NodeHasSufficientPID
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s                kubelet          Node pause-864402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s                kubelet          Node pause-864402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s                kubelet          Node pause-864402 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node pause-864402 event: Registered Node pause-864402 in Controller
	  Normal  NodeReady                26s                kubelet          Node pause-864402 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001118] FS-Cache: O-key=[8] '1a733b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=000000e4 [p=000000db fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000df96581b
	[  +0.001091] FS-Cache: N-key=[8] '1a733b0000000000'
	[  +0.003238] FS-Cache: Duplicate cookie detected
	[  +0.000706] FS-Cache: O-cookie c=000000de [p=000000db fl=226 nc=0 na=1]
	[  +0.001048] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=00000000a77d32c1
	[  +0.001108] FS-Cache: O-key=[8] '1a733b0000000000'
	[  +0.000748] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
	[  +0.000997] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000003ba586a5
	[  +0.001137] FS-Cache: N-key=[8] '1a733b0000000000'
	[  +2.731848] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=00000000ee5383df
	[  +0.001099] FS-Cache: O-key=[8] '19733b0000000000'
	[  +0.000771] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000052ff0a6
	[  +0.001114] FS-Cache: N-key=[8] '19733b0000000000'
	[  +0.302039] FS-Cache: Duplicate cookie detected
	[  +0.000741] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=000000001645a21b
	[  +0.001107] FS-Cache: O-key=[8] '1f733b0000000000'
	[  +0.000755] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000df96581b
	[  +0.001106] FS-Cache: N-key=[8] '1f733b0000000000'
	
	
	==> etcd [8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99] <==
	{"level":"info","ts":"2024-07-23T15:13:58.627408Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T15:13:58.651122Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T15:13:58.652018Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T15:13:58.652271Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T15:13:58.652354Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T15:13:58.652507Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-07-23T15:13:58.652542Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-07-23T15:13:58.661449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-07-23T15:13:58.661575Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-07-23T15:13:58.66169Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:58.661759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:14:00.366607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:00.36674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:00.366794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:00.366836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:00.366875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:00.366914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:00.36695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:00.373545Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-864402 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:14:00.373838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:14:00.374191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:14:00.374412Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:14:00.374471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:14:00.376091Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-07-23T15:14:00.392114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b203059c3c1438dfd31a735bfcf31379d96d975eec700982ccca07d52f6dc740] <==
	{"level":"info","ts":"2024-07-23T15:13:06.218839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-07-23T15:13:06.218875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-07-23T15:13:06.21893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-07-23T15:13:06.218966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-07-23T15:13:06.226725Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-864402 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:13:06.226838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:13:06.227133Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:06.24823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-07-23T15:13:06.248639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:06.26888Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:06.269021Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:06.253317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:13:06.25336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:13:06.269287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:13:06.277292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T15:13:48.543118Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-23T15:13:48.545648Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-864402","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-07-23T15:13:48.545798Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T15:13:48.550791Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T15:13:48.622045Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T15:13:48.622106Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-23T15:13:48.62218Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-07-23T15:13:48.623871Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-07-23T15:13:48.623988Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-07-23T15:13:48.624045Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-864402","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 15:14:09 up 23:56,  0 users,  load average: 4.78, 2.54, 2.02
	Linux pause-864402 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda] <==
	I0723 15:13:58.146617       1 main.go:178] kindnetd IP family: "ipv4"
	I0723 15:13:58.152684       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0723 15:13:58.628890       1 controller.go:334] Starting controller kube-network-policies
	I0723 15:13:58.650599       1 controller.go:338] Waiting for informer caches to sync
	I0723 15:13:58.650686       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	W0723 15:14:04.900189       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:14:04.900310       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:14:04.900393       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:14:04.900438       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:14:04.900491       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:14:04.900541       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:14:05.748988       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:14:05.749108       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:14:05.816616       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:14:05.816747       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:14:06.036463       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:14:06.036506       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:14:07.606290       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:14:07.606337       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:14:08.019721       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:14:08.019763       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:14:08.336069       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:14:08.336107       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0723 15:14:08.628888       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0723 15:14:08.628937       1 main.go:299] handling current node
	
	
	==> kindnet [56a202d711bd0cb7058cc944a7bc7f5e3da05738e3963343fd9ed69ce506f333] <==
	E0723 15:13:32.034635       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:32.977310       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:13:32.977346       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:33.576268       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:13:33.576409       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:13:33.629966       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:13:33.630102       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:13:35.583691       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:13:35.583811       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:36.336731       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:13:36.336765       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:13:36.682181       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:13:36.682314       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:13:38.793142       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:13:38.793485       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:39.652891       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:13:39.653012       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:13:41.176091       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:13:41.176126       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0723 15:13:42.021645       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0723 15:13:42.021799       1 main.go:299] handling current node
	W0723 15:13:46.401839       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:13:46.402085       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:13:47.933579       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:13:47.933633       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
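
Both kindnet blocks show the same pattern: client-go reflectors retrying list/watch of pods, namespaces, and networkpolicies and being refused by RBAC for the system:serviceaccount:kube-system:kindnet user, most likely a transient race while the restarted control plane re-syncs its authorization state; the daemon's periodic node loop keeps running regardless ("Handling node with IPs" at 15:13:42 and 15:14:08). For reference, a minimal sketch, using the standard k8s.io/api/rbac/v1 types and a hypothetical role name (not necessarily the manifest kindnet ships), of a ClusterRole granting exactly the denied verbs and resources:

package main

import (
	"encoding/json"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	role := rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "kindnet"}, // hypothetical name
		Rules: []rbacv1.PolicyRule{
			// Core-group resources denied in the log above.
			{APIGroups: []string{""}, Resources: []string{"pods", "namespaces"}, Verbs: []string{"list", "watch"}},
			// networking.k8s.io resource denied in the log above.
			{APIGroups: []string{"networking.k8s.io"}, Resources: []string{"networkpolicies"}, Verbs: []string{"list", "watch"}},
		},
	}
	out, _ := json.MarshalIndent(role, "", "  ")
	fmt.Println(string(out)) // emit as JSON for inspection
}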
	
	
	==> kube-apiserver [99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171] <==
	I0723 15:14:04.554774       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0723 15:14:04.554814       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0723 15:14:04.555154       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0723 15:14:04.555285       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0723 15:14:04.535962       1 aggregator.go:163] waiting for initial CRD sync...
	I0723 15:14:04.710006       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0723 15:14:04.747482       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0723 15:14:04.824373       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 15:14:04.824459       1 policy_source.go:224] refreshing policies
	I0723 15:14:04.847848       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0723 15:14:04.852292       1 aggregator.go:165] initial CRD sync complete...
	I0723 15:14:04.852444       1 autoregister_controller.go:141] Starting autoregister controller
	I0723 15:14:04.852516       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0723 15:14:04.852550       1 cache.go:39] Caches are synced for autoregister controller
	I0723 15:14:04.882547       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0723 15:14:04.912736       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 15:14:04.997199       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0723 15:14:04.997419       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0723 15:14:05.010876       1 shared_informer.go:320] Caches are synced for configmaps
	I0723 15:14:05.011048       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0723 15:14:05.011105       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0723 15:14:05.012231       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 15:14:05.029381       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0723 15:14:05.050445       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0723 15:14:05.553659       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-apiserver [de67f38f84571a6ff8218d47a832510c1ddcb92ac9d51bfff7edb8e4d82ca409] <==
	I0723 15:13:16.108844       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 15:13:28.896727       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0723 15:13:29.239845       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0723 15:13:48.542259       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0723 15:13:48.556590       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556655       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556705       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556748       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556785       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556831       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556871       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556909       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556942       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556974       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.557470       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.557520       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.557570       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564470       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564566       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564625       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564677       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564850       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564910       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564977       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.566030       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [ea06788641402edebc224a5a98dec557723cdf365b7fcd0c05b20e74207c02ed] <==
	I0723 15:13:28.439740       1 shared_informer.go:320] Caches are synced for expand
	I0723 15:13:28.449300       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0723 15:13:28.466888       1 shared_informer.go:320] Caches are synced for PVC protection
	I0723 15:13:28.475033       1 shared_informer.go:320] Caches are synced for stateful set
	I0723 15:13:28.490578       1 shared_informer.go:320] Caches are synced for persistent volume
	I0723 15:13:28.496895       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 15:13:28.517424       1 shared_informer.go:320] Caches are synced for ephemeral
	I0723 15:13:28.520897       1 shared_informer.go:320] Caches are synced for attach detach
	I0723 15:13:28.545594       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 15:13:29.004366       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 15:13:29.004456       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 15:13:29.004468       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0723 15:13:29.465782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="221.207829ms"
	I0723 15:13:29.479679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.843366ms"
	I0723 15:13:29.479759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.537µs"
	I0723 15:13:42.161301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.724µs"
	I0723 15:13:42.174250       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.069µs"
	I0723 15:13:42.190419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.646µs"
	I0723 15:13:42.209146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.969µs"
	I0723 15:13:43.370747       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0723 15:13:45.163856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="176.034µs"
	I0723 15:13:45.257032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.724402ms"
	I0723 15:13:45.259453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="123.849µs"
	I0723 15:13:45.309986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.398457ms"
	I0723 15:13:45.310236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.688µs"
	
	
	==> kube-controller-manager [f69c46d26d552fcd64207b6dd16a65aa163aec359db25d9f540d40ebcf724a21] <==
	I0723 15:14:06.899444       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0723 15:14:06.899500       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0723 15:14:06.905938       1 shared_informer.go:320] Caches are synced for tokens
	I0723 15:14:06.910675       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0723 15:14:06.911166       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0723 15:14:06.912753       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0723 15:14:06.916012       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0723 15:14:06.916556       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0723 15:14:06.916610       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0723 15:14:06.920830       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0723 15:14:06.921136       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0723 15:14:06.921287       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0723 15:14:06.925478       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0723 15:14:06.926127       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0723 15:14:06.926195       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0723 15:14:06.929714       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0723 15:14:06.929842       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0723 15:14:06.929884       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0723 15:14:06.929817       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0723 15:14:06.930631       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0723 15:14:06.936831       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0723 15:14:06.937006       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0723 15:14:06.937211       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0723 15:14:06.940525       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0723 15:14:06.941291       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	
	
	==> kube-proxy [0de9dcbc9d41bf6080842f82238b88f2e50689901abb8b333766b2dfa0e63a55] <==
	I0723 15:13:29.986786       1 server_linux.go:69] "Using iptables proxy"
	I0723 15:13:30.002324       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	I0723 15:13:30.044342       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0723 15:13:30.044485       1 server_linux.go:165] "Using iptables Proxier"
	I0723 15:13:30.047603       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0723 15:13:30.047635       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0723 15:13:30.047666       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 15:13:30.047919       1 server.go:872] "Version info" version="v1.30.3"
	I0723 15:13:30.047945       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:13:30.049890       1 config.go:192] "Starting service config controller"
	I0723 15:13:30.050137       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:13:30.050472       1 config.go:319] "Starting node config controller"
	I0723 15:13:30.052198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:13:30.052428       1 config.go:101] "Starting endpoint slice config controller"
	I0723 15:13:30.053039       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:13:30.150910       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:13:30.154084       1 shared_informer.go:320] Caches are synced for node config
	I0723 15:13:30.154111       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0] <==
	I0723 15:14:03.656702       1 server_linux.go:69] "Using iptables proxy"
	I0723 15:14:04.925344       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	I0723 15:14:05.078062       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0723 15:14:05.078120       1 server_linux.go:165] "Using iptables Proxier"
	I0723 15:14:05.079690       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0723 15:14:05.079715       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0723 15:14:05.079743       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 15:14:05.079954       1 server.go:872] "Version info" version="v1.30.3"
	I0723 15:14:05.079965       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:14:05.081108       1 config.go:192] "Starting service config controller"
	I0723 15:14:05.081132       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:14:05.081166       1 config.go:101] "Starting endpoint slice config controller"
	I0723 15:14:05.081176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:14:05.082134       1 config.go:319] "Starting node config controller"
	I0723 15:14:05.082154       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:14:05.182166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:14:05.182240       1 shared_informer.go:320] Caches are synced for node config
	I0723 15:14:05.182169       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b] <==
	I0723 15:14:02.304098       1 serving.go:380] Generated self-signed cert in-memory
	W0723 15:14:04.782882       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 15:14:04.782978       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 15:14:04.783013       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 15:14:04.783044       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 15:14:04.901473       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 15:14:04.902736       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:14:04.907250       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 15:14:04.910104       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 15:14:04.922958       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:14:04.910128       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 15:14:05.023553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [64765c244fad2609e102831ecd2f0bc2856d178e82298ec0f01ed7d1ea631562] <==
	W0723 15:13:13.976208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0723 15:13:13.976260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0723 15:13:14.011473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 15:13:14.011604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 15:13:14.021010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:13:14.021130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:14.133670       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 15:13:14.133781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 15:13:14.285130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 15:13:14.285260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 15:13:14.321619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 15:13:14.321751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 15:13:14.333812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0723 15:13:14.333937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0723 15:13:14.383375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 15:13:14.383507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 15:13:14.491159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 15:13:14.491345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 15:13:14.491231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 15:13:14.491442       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 15:13:14.728481       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 15:13:14.728635       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0723 15:13:17.547800       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:13:48.539584       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0723 15:13:48.539693       1 run.go:74] "command failed" err="finished without leader elect"
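
Two distinct things appear in this block: the 15:13:14 forbidden errors are a startup race that resolves once the authorizer state syncs (the informer reports synced at 15:13:17), and the final "finished without leader elect" at 15:13:48 is the scheduler exiting as the whole control plane is terminated (etcd logs the SIGTERM in the same second, above). The error text comes from the scheduler's leader-election wrapper; a minimal sketch of the generic client-go pattern behind it, assuming the k8s.io/client-go/tools/leaderelection package and hypothetical lease and holder names (not the scheduler's actual wiring), shows how cancelling the surrounding context ends the run:

package main

import (
	"context"
	"os"
	"os/signal"
	"syscall"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// SIGTERM cancels ctx, which makes RunOrDie return, as at 15:13:48.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
	defer stop()

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "demo-scheduler", Namespace: "kube-system"}, // hypothetical lease
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-holder"}, // hypothetical identity
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { <-ctx.Done() }, // real work would go here
			OnStoppedLeading: func() { /* shut down cleanly */ },
		},
	})
}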
	
	
	==> kubelet <==
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.285582    1590 status_manager.go:853] "Failed to get status for pod" podUID="e20d03de-bb10-43db-ae8b-154cad292ccd" pod="kube-system/kindnet-tzchc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-tzchc\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.285847    1590 status_manager.go:853] "Failed to get status for pod" podUID="814d4c9e-fdfa-45cf-a6d0-2bdfd7e172f4" pod="kube-system/coredns-7db6d8ff4d-8rdhr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8rdhr\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286064    1590 status_manager.go:853] "Failed to get status for pod" podUID="a7196210-77e4-4a04-ace3-2a2f4ffca408" pod="kube-system/coredns-7db6d8ff4d-cw9s5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cw9s5\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286256    1590 status_manager.go:853] "Failed to get status for pod" podUID="92b5fbd752cb11389a0e6c5cfdad3f14" pod="kube-system/kube-scheduler-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286454    1590 status_manager.go:853] "Failed to get status for pod" podUID="9e4724bd4603eae1502167ee3056854a" pod="kube-system/etcd-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286657    1590 status_manager.go:853] "Failed to get status for pod" podUID="cb8d092b9aeb2ec0ae14ddf2e642ed10" pod="kube-system/kube-controller-manager-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286880    1590 status_manager.go:853] "Failed to get status for pod" podUID="24283b91b44a43cf7fec0a766c7718cd" pod="kube-system/kube-apiserver-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.288409    1590 status_manager.go:853] "Failed to get status for pod" podUID="814d4c9e-fdfa-45cf-a6d0-2bdfd7e172f4" pod="kube-system/coredns-7db6d8ff4d-8rdhr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8rdhr\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.288643    1590 status_manager.go:853] "Failed to get status for pod" podUID="a7196210-77e4-4a04-ace3-2a2f4ffca408" pod="kube-system/coredns-7db6d8ff4d-cw9s5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cw9s5\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.288879    1590 status_manager.go:853] "Failed to get status for pod" podUID="92b5fbd752cb11389a0e6c5cfdad3f14" pod="kube-system/kube-scheduler-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.289249    1590 status_manager.go:853] "Failed to get status for pod" podUID="9e4724bd4603eae1502167ee3056854a" pod="kube-system/etcd-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.289567    1590 status_manager.go:853] "Failed to get status for pod" podUID="cb8d092b9aeb2ec0ae14ddf2e642ed10" pod="kube-system/kube-controller-manager-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.289765    1590 status_manager.go:853] "Failed to get status for pod" podUID="24283b91b44a43cf7fec0a766c7718cd" pod="kube-system/kube-apiserver-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.289950    1590 status_manager.go:853] "Failed to get status for pod" podUID="d15badef-bb4d-428a-9402-5dd73f507db1" pod="kube-system/kube-proxy-vjl8m" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjl8m\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.290129    1590 status_manager.go:853] "Failed to get status for pod" podUID="e20d03de-bb10-43db-ae8b-154cad292ccd" pod="kube-system/kindnet-tzchc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-tzchc\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.292965    1590 status_manager.go:853] "Failed to get status for pod" podUID="814d4c9e-fdfa-45cf-a6d0-2bdfd7e172f4" pod="kube-system/coredns-7db6d8ff4d-8rdhr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8rdhr\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.293197    1590 status_manager.go:853] "Failed to get status for pod" podUID="a7196210-77e4-4a04-ace3-2a2f4ffca408" pod="kube-system/coredns-7db6d8ff4d-cw9s5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cw9s5\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.293493    1590 status_manager.go:853] "Failed to get status for pod" podUID="92b5fbd752cb11389a0e6c5cfdad3f14" pod="kube-system/kube-scheduler-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.293781    1590 status_manager.go:853] "Failed to get status for pod" podUID="9e4724bd4603eae1502167ee3056854a" pod="kube-system/etcd-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.293995    1590 status_manager.go:853] "Failed to get status for pod" podUID="cb8d092b9aeb2ec0ae14ddf2e642ed10" pod="kube-system/kube-controller-manager-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.294183    1590 status_manager.go:853] "Failed to get status for pod" podUID="24283b91b44a43cf7fec0a766c7718cd" pod="kube-system/kube-apiserver-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.294454    1590 status_manager.go:853] "Failed to get status for pod" podUID="d15badef-bb4d-428a-9402-5dd73f507db1" pod="kube-system/kube-proxy-vjl8m" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjl8m\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.294914    1590 status_manager.go:853] "Failed to get status for pod" podUID="e20d03de-bb10-43db-ae8b-154cad292ccd" pod="kube-system/kindnet-tzchc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-tzchc\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:14:06 pause-864402 kubelet[1590]: W0723 15:14:06.159408    1590 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Jul 23 15:14:06 pause-864402 kubelet[1590]: W0723 15:14:06.160566    1590 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
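
The two "Could not get instant cpu stats: cumulative stats decrease" warnings are a side effect of the restart: instantaneous CPU usage is derived from monotonically increasing cumulative counters, and when a counter resets the delta turns negative, so the sample is dropped rather than reported. A minimal sketch of that guard (hypothetical helper, same arithmetic as the warning implies):

package main

import (
	"errors"
	"fmt"
	"time"
)

// instantCPU derives a usage rate from two cumulative counter readings.
// A reading lower than its predecessor means the counter reset, so the
// sample is invalid and must be skipped instead of reported as negative.
func instantCPU(prev, cur uint64, dt time.Duration) (float64, error) {
	if cur < prev {
		return 0, errors.New("cumulative stats decrease")
	}
	return float64(cur-prev) / dt.Seconds(), nil
}

func main() {
	// Counter went backwards across a restart: drop the sample.
	if _, err := instantCPU(900, 100, time.Second); err != nil {
		fmt.Println("skipping sample:", err)
	}
}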
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0723 15:14:07.434604 3474742 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19319-3317687/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
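The stderr line above is a plain Go limitation rather than a cluster problem: bufio.Scanner caps tokens at bufio.MaxScanTokenSize (64 KiB) by default and fails with ErrTooLong, whose message is exactly "bufio.Scanner: token too long", so a single very long line in lastStart.txt is enough to abort the read in logs.go. A minimal sketch of the workaround, using a shortened file name as a stand-in for the full path in the log, is to give the scanner a larger buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // stand-in for the full path in the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default cap is bufio.MaxScanTokenSize (64 KiB) per token; raise it so
	// one long line no longer aborts the scan.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err) // without Buffer: "bufio.Scanner: token too long"
	}
}
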
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-864402 -n pause-864402
helpers_test.go:261: (dbg) Run:  kubectl --context pause-864402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-864402
helpers_test.go:235: (dbg) docker inspect pause-864402:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b",
	        "Created": "2024-07-23T15:12:42.62184285Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3464571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-23T15:12:42.81302532Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:71a7ac3dcc1f66f9b927c200bbaca5de093c77584a8e2cceb20f7c37b7028780",
	        "ResolvConfPath": "/var/lib/docker/containers/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b/hosts",
	        "LogPath": "/var/lib/docker/containers/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b/8f70c9e72dce33a97e6652883611bbf9eba987d3f9946750ddd5276690d2a98b-json.log",
	        "Name": "/pause-864402",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-864402:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-864402",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0268c37d37deb8ea02c87254d4984947bd4a6d2871fecfbfebcb475f25a7fb2-init/diff:/var/lib/docker/overlay2/cc3f8b49bb50b989dafe94ead705091dcc80edbdd409e161d5028bc93b57b742/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0268c37d37deb8ea02c87254d4984947bd4a6d2871fecfbfebcb475f25a7fb2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0268c37d37deb8ea02c87254d4984947bd4a6d2871fecfbfebcb475f25a7fb2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0268c37d37deb8ea02c87254d4984947bd4a6d2871fecfbfebcb475f25a7fb2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-864402",
	                "Source": "/var/lib/docker/volumes/pause-864402/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-864402",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-864402",
	                "name.minikube.sigs.k8s.io": "pause-864402",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b3305b54290d3b476e2f31e6071b9b245d67f794d28755e8290893bdd3f3af1",
	            "SandboxKey": "/var/run/docker/netns/9b3305b54290",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37362"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37363"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37366"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37364"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37365"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-864402": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ed2707062fcb8c2560a93f07dbc68eece32d47deb7abc78e42ea1d714fb25a36",
	                    "EndpointID": "0af19ba7e6b67c3b763faa7368f687d1e147a286be83c635fd6423c6fa68e35b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-864402",
	                        "8f70c9e72dce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
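The NetworkSettings.Ports map above is how the harness reaches the node: the cli_runner.go entries further down run docker container inspect with a Go template to pull the host port bound to 22/tcp (37362 in this run) before opening an SSH client to 127.0.0.1. An equivalent standalone Go sketch, assuming only the docker CLI on PATH and the container name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the cli_runner lines below pass to docker.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-864402").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 37362 in this run
	}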
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-864402 -n pause-864402
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-864402 logs -n 25
E0723 15:14:13.072912 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-864402 logs -n 25: (2.20299674s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	|         | --cancel-scheduled             |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:11 UTC | 23 Jul 24 15:11 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-045914       | scheduled-stop-045914       | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	| start   | -p insufficient-storage-028797 | insufficient-storage-028797 | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-028797 | insufficient-storage-028797 | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
	| start   | -p pause-864402 --memory=2048  | pause-864402                | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:13 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:13 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-231608 sudo    | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-864402                | pause-864402                | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:14 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-231608 sudo    | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-231608         | NoKubernetes-231608         | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
	| start   | -p missing-upgrade-018960      | minikube                    | jenkins | v1.26.0 | 23 Jul 24 15:13 UTC |                     |
	|         | --memory=2200 --driver=docker  |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 15:13:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.18.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 15:13:55.151570 3473787 out.go:296] Setting OutFile to fd 1 ...
	I0723 15:13:55.151679 3473787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0723 15:13:55.151683 3473787 out.go:309] Setting ErrFile to fd 2...
	I0723 15:13:55.151687 3473787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0723 15:13:55.151922 3473787 root.go:329] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 15:13:55.152206 3473787 out.go:303] Setting JSON to false
	I0723 15:13:55.153109 3473787 start.go:115] hostinfo: {"hostname":"ip-172-31-21-244","uptime":86182,"bootTime":1721661454,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 15:13:55.153175 3473787 start.go:125] virtualization:  
	I0723 15:13:55.156792 3473787 out.go:177] * [missing-upgrade-018960] minikube v1.26.0 on Ubuntu 20.04 (arm64)
	I0723 15:13:55.159558 3473787 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:13:55.159609 3473787 notify.go:193] Checking for updates...
	I0723 15:13:55.164791 3473787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:13:55.167372 3473787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 15:13:55.169958 3473787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 15:13:55.172686 3473787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 15:13:55.175244 3473787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:13:55.178281 3473787 config.go:178] Loaded profile config "pause-864402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:13:55.178327 3473787 driver.go:360] Setting default libvirt URI to qemu:///system
	I0723 15:13:55.218013 3473787 docker.go:137] docker version: linux-27.1.0
	I0723 15:13:55.218100 3473787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 15:13:55.234969 3473787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3317687/.minikube/last_update_check: {Name:mk486b32d34537fa2821a17e03a096a80d26a8e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:13:55.238058 3473787 out.go:177] * minikube 1.33.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.33.1
	I0723 15:13:55.240642 3473787 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I0723 15:13:55.299213 3473787 info.go:265] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-23 15:13:55.287882076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 15:13:55.299304 3473787 docker.go:254] overlay module found
	I0723 15:13:55.302392 3473787 out.go:177] * Using the docker driver based on user configuration
	I0723 15:13:55.305714 3473787 start.go:284] selected driver: docker
	I0723 15:13:55.305734 3473787 start.go:805] validating driver "docker" against <nil>
	I0723 15:13:55.305754 3473787 start.go:816] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:13:55.306345 3473787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 15:13:55.398990 3473787 info.go:265] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-23 15:13:55.388006739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 15:13:55.399104 3473787 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0723 15:13:55.399273 3473787 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 15:13:55.402368 3473787 out.go:177] * Using Docker driver with root privileges
	I0723 15:13:55.404986 3473787 cni.go:95] Creating CNI manager for ""
	I0723 15:13:55.404999 3473787 cni.go:162] "docker" driver + crio runtime found, recommending kindnet
	I0723 15:13:55.405008 3473787 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 15:13:55.405017 3473787 start_flags.go:310] config:
	{Name:missing-upgrade-018960 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:missing-upgrade-018960 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0723 15:13:55.407936 3473787 out.go:177] * Starting control plane node missing-upgrade-018960 in cluster missing-upgrade-018960
	I0723 15:13:55.411587 3473787 cache.go:120] Beginning downloading kic base image for docker with crio
	I0723 15:13:55.414192 3473787 out.go:177] * Pulling base image ...
	I0723 15:13:55.416661 3473787 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0723 15:13:55.416821 3473787 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local docker daemon
	I0723 15:13:55.433054 3473787 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local cache
	I0723 15:13:55.433267 3473787 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local cache directory
	I0723 15:13:55.433828 3473787 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 to local cache
	I0723 15:13:55.475352 3473787 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-arm64.tar.lz4
	I0723 15:13:55.475365 3473787 cache.go:57] Caching tarball of preloaded images
	I0723 15:13:55.475528 3473787 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0723 15:13:55.478298 3473787 out.go:177] * Downloading Kubernetes v1.24.1 preload ...
	I0723 15:13:53.773779 3472620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0723 15:13:53.773804 3472620 machine.go:97] duration metric: took 6.410154461s to provisionDockerMachine
	I0723 15:13:53.773817 3472620 start.go:293] postStartSetup for "pause-864402" (driver="docker")
	I0723 15:13:53.773833 3472620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0723 15:13:53.773906 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0723 15:13:53.773973 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:53.791890 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:53.883547 3472620 ssh_runner.go:195] Run: cat /etc/os-release
	I0723 15:13:53.887349 3472620 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0723 15:13:53.887384 3472620 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0723 15:13:53.887395 3472620 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0723 15:13:53.887402 3472620 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0723 15:13:53.887412 3472620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/addons for local assets ...
	I0723 15:13:53.887470 3472620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3317687/.minikube/files for local assets ...
	I0723 15:13:53.887553 3472620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem -> 33230802.pem in /etc/ssl/certs
	I0723 15:13:53.887659 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0723 15:13:53.896377 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem --> /etc/ssl/certs/33230802.pem (1708 bytes)
	I0723 15:13:53.921262 3472620 start.go:296] duration metric: took 147.429547ms for postStartSetup
	I0723 15:13:53.921344 3472620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 15:13:53.921398 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:53.937806 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.031674 3472620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0723 15:13:54.037551 3472620 fix.go:56] duration metric: took 6.696172806s for fixHost
	I0723 15:13:54.037590 3472620 start.go:83] releasing machines lock for "pause-864402", held for 6.696234353s
	I0723 15:13:54.037667 3472620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-864402
	I0723 15:13:54.078382 3472620 ssh_runner.go:195] Run: cat /version.json
	I0723 15:13:54.078440 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:54.078800 3472620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0723 15:13:54.078868 3472620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-864402
	I0723 15:13:54.104406 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.113336 3472620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37362 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/pause-864402/id_rsa Username:docker}
	I0723 15:13:54.206580 3472620 ssh_runner.go:195] Run: systemctl --version
	I0723 15:13:54.350847 3472620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0723 15:13:54.508782 3472620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0723 15:13:54.513127 3472620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:13:54.522226 3472620 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0723 15:13:54.522312 3472620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0723 15:13:54.531250 3472620 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0723 15:13:54.531271 3472620 start.go:495] detecting cgroup driver to use...
	I0723 15:13:54.531304 3472620 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0723 15:13:54.531351 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0723 15:13:54.543727 3472620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0723 15:13:54.555796 3472620 docker.go:217] disabling cri-docker service (if available) ...
	I0723 15:13:54.555860 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0723 15:13:54.569340 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0723 15:13:54.581294 3472620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0723 15:13:54.753133 3472620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0723 15:13:54.907097 3472620 docker.go:233] disabling docker service ...
	I0723 15:13:54.907179 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0723 15:13:54.922924 3472620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0723 15:13:54.935806 3472620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0723 15:13:55.096113 3472620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0723 15:13:55.252145 3472620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0723 15:13:55.268649 3472620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0723 15:13:55.291908 3472620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0723 15:13:55.291969 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.303238 3472620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0723 15:13:55.303312 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.315210 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.326089 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.339014 3472620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0723 15:13:55.348606 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.358155 3472620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.371379 3472620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0723 15:13:55.385859 3472620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0723 15:13:55.400832 3472620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0723 15:13:55.410072 3472620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:13:55.539371 3472620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0723 15:13:56.223641 3472620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0723 15:13:56.223785 3472620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0723 15:13:56.233648 3472620 start.go:563] Will wait 60s for crictl version
	I0723 15:13:56.233806 3472620 ssh_runner.go:195] Run: which crictl
	I0723 15:13:56.243168 3472620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0723 15:13:56.338106 3472620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0723 15:13:56.338218 3472620 ssh_runner.go:195] Run: crio --version
	I0723 15:13:56.400339 3472620 ssh_runner.go:195] Run: crio --version
	I0723 15:13:56.453239 3472620 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0723 15:13:56.455003 3472620 cli_runner.go:164] Run: docker network inspect pause-864402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0723 15:13:56.470092 3472620 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0723 15:13:56.473824 3472620 kubeadm.go:883] updating cluster {Name:pause-864402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0723 15:13:56.473979 3472620 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 15:13:56.474051 3472620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:13:56.527299 3472620 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:13:56.527324 3472620 crio.go:433] Images already preloaded, skipping extraction
	I0723 15:13:56.527379 3472620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0723 15:13:56.587248 3472620 crio.go:514] all images are preloaded for cri-o runtime.
	I0723 15:13:56.587276 3472620 cache_images.go:84] Images are preloaded, skipping loading
	I0723 15:13:56.587285 3472620 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.30.3 crio true true} ...
	I0723 15:13:56.587404 3472620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-864402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0723 15:13:56.587491 3472620 ssh_runner.go:195] Run: crio config
	I0723 15:13:56.681898 3472620 cni.go:84] Creating CNI manager for ""
	I0723 15:13:56.681958 3472620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 15:13:56.681982 3472620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0723 15:13:56.682017 3472620 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-864402 NodeName:pause-864402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0723 15:13:56.682179 3472620 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-864402"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0723 15:13:56.682265 3472620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0723 15:13:56.691778 3472620 binaries.go:44] Found k8s binaries, skipping transfer
	I0723 15:13:56.691891 3472620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0723 15:13:56.700710 3472620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0723 15:13:56.719151 3472620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0723 15:13:56.737121 3472620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0723 15:13:56.754845 3472620 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0723 15:13:56.758824 3472620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0723 15:13:56.949133 3472620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0723 15:13:56.977082 3472620 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402 for IP: 192.168.76.2
	I0723 15:13:56.977101 3472620 certs.go:194] generating shared ca certs ...
	I0723 15:13:56.977117 3472620 certs.go:226] acquiring lock for ca certs: {Name:mk9061483da1430ff0fd8e32bc77025286e53111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0723 15:13:56.977248 3472620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key
	I0723 15:13:56.977289 3472620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key
	I0723 15:13:56.977297 3472620 certs.go:256] generating profile certs ...
	I0723 15:13:56.977394 3472620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/client.key
	I0723 15:13:56.977462 3472620 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.key.19941d83
	I0723 15:13:56.977504 3472620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.key
	I0723 15:13:56.977611 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080.pem (1338 bytes)
	W0723 15:13:56.977637 3472620 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080_empty.pem, impossibly tiny 0 bytes
	I0723 15:13:56.977645 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca-key.pem (1679 bytes)
	I0723 15:13:56.977669 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/ca.pem (1082 bytes)
	I0723 15:13:56.977697 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/cert.pem (1123 bytes)
	I0723 15:13:56.977718 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/key.pem (1679 bytes)
	I0723 15:13:56.977759 3472620 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem (1708 bytes)
	I0723 15:13:56.978384 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0723 15:13:57.028558 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0723 15:13:57.070000 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0723 15:13:57.103231 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0723 15:13:57.132732 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0723 15:13:57.161229 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0723 15:13:57.220308 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0723 15:13:57.319777 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/pause-864402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0723 15:13:57.569484 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0723 15:13:57.632265 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/certs/3323080.pem --> /usr/share/ca-certificates/3323080.pem (1338 bytes)
	I0723 15:13:57.720923 3472620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/ssl/certs/33230802.pem --> /usr/share/ca-certificates/33230802.pem (1708 bytes)
	I0723 15:13:57.937972 3472620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0723 15:13:57.989494 3472620 ssh_runner.go:195] Run: openssl version
	I0723 15:13:58.073449 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0723 15:13:58.159897 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.190699 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 14:27 /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.190766 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0723 15:13:58.243316 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0723 15:13:58.259885 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3323080.pem && ln -fs /usr/share/ca-certificates/3323080.pem /etc/ssl/certs/3323080.pem"
	I0723 15:13:58.270598 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.274548 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:38 /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.274607 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3323080.pem
	I0723 15:13:58.281784 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3323080.pem /etc/ssl/certs/51391683.0"
	I0723 15:13:58.294053 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33230802.pem && ln -fs /usr/share/ca-certificates/33230802.pem /etc/ssl/certs/33230802.pem"
	I0723 15:13:58.304185 3472620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.307798 3472620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:38 /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.307865 3472620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33230802.pem
	I0723 15:13:58.314838 3472620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33230802.pem /etc/ssl/certs/3ec20f2e.0"
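
The three link steps above follow OpenSSL's trust-store convention: each CA under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under the name <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout` (e.g. b5213941.0 for minikubeCA.pem in this run). A minimal Go sketch of that step, shelling out to openssl exactly as the log does — illustrative paths, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // assumed path
	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of ln -fs: drop any stale link, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}
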
	I0723 15:13:58.326602 3472620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0723 15:13:58.333897 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0723 15:13:58.343025 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0723 15:13:58.355954 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0723 15:13:58.364341 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0723 15:13:58.377147 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0723 15:13:58.399156 3472620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
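
Each `-checkend 86400` run above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours); this is how the existing control-plane certs are checked for validity before being reused. A rough Go equivalent using crypto/x509 — a sketch under an assumed path, not the minikube implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same predicate as `openssl x509 -noout -checkend 86400`.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least another 86400s")
}
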
	I0723 15:13:58.417388 3472620 kubeadm.go:392] StartCluster: {Name:pause-864402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-864402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 15:13:58.417511 3472620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0723 15:13:58.417589 3472620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0723 15:13:58.481361 3472620 cri.go:89] found id: "5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b"
	I0723 15:13:58.481380 3472620 cri.go:89] found id: "394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d"
	I0723 15:13:58.481386 3472620 cri.go:89] found id: "99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171"
	I0723 15:13:58.481390 3472620 cri.go:89] found id: "8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99"
	I0723 15:13:58.481393 3472620 cri.go:89] found id: "13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda"
	I0723 15:13:58.481396 3472620 cri.go:89] found id: "682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6"
	I0723 15:13:58.481399 3472620 cri.go:89] found id: "e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0"
	I0723 15:13:58.481403 3472620 cri.go:89] found id: "f69c46d26d552fcd64207b6dd16a65aa163aec359db25d9f540d40ebcf724a21"
	I0723 15:13:58.481406 3472620 cri.go:89] found id: "2ab671d3c2c489377ad82bf8c5a3897ffe297d7fa42d663d3b6a81f5edae2f0c"
	I0723 15:13:58.481415 3472620 cri.go:89] found id: "42e151d36a512ec37fa96e308c539b09fec09053a664e3849680f927f6452a2c"
	I0723 15:13:58.481418 3472620 cri.go:89] found id: "56a202d711bd0cb7058cc944a7bc7f5e3da05738e3963343fd9ed69ce506f333"
	I0723 15:13:58.481421 3472620 cri.go:89] found id: "0de9dcbc9d41bf6080842f82238b88f2e50689901abb8b333766b2dfa0e63a55"
	I0723 15:13:58.481424 3472620 cri.go:89] found id: "b203059c3c1438dfd31a735bfcf31379d96d975eec700982ccca07d52f6dc740"
	I0723 15:13:58.481428 3472620 cri.go:89] found id: "ea06788641402edebc224a5a98dec557723cdf365b7fcd0c05b20e74207c02ed"
	I0723 15:13:58.481433 3472620 cri.go:89] found id: "de67f38f84571a6ff8218d47a832510c1ddcb92ac9d51bfff7edb8e4d82ca409"
	I0723 15:13:58.481436 3472620 cri.go:89] found id: "64765c244fad2609e102831ecd2f0bc2856d178e82298ec0f01ed7d1ea631562"
	I0723 15:13:58.481439 3472620 cri.go:89] found id: ""
	I0723 15:13:58.481497 3472620 ssh_runner.go:195] Run: sudo runc list -f json
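
The "found id" listing above comes from filtering all CRI containers by the io.kubernetes.pod.namespace=kube-system pod label. A hedged Go sketch of the same query, assuming crictl and sudo are available as they are in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the logged command: crictl ps -a --quiet --label <key>=<value>
	// prints one container ID per line, including exited containers (-a).
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
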
	
	
	==> CRI-O <==
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.795145741Z" level=info msg="Started container" PID=2816 containerID=e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0 description=kube-system/kube-proxy-vjl8m/kube-proxy id=932c7296-6494-4b9f-980a-a78ef6a59f02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5953e88cccb50d35924fc6530fe7e19f24d6ba8159af17501564df01065dce2a
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.821625082Z" level=info msg="Started container" PID=2929 containerID=394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d description=kube-system/coredns-7db6d8ff4d-8rdhr/coredns id=32d74735-81aa-45a9-aed9-2171fa3275f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b823c1ea53fb3194db4599042c018f8ce3acfe7835dbf7685b888fe88bfb67cd
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.824323411Z" level=info msg="Started container" PID=2912 containerID=5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b description=kube-system/kube-scheduler-pause-864402/kube-scheduler id=d076ce5c-87ec-4d65-883e-69ac2d6c6c48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0f78f96174f739f36ff60205abc1e8fa8c0cef7060d3c9fe738e48afc6050db
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.826483637Z" level=info msg="Created container 13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda: kube-system/kindnet-tzchc/kindnet-cni" id=f0be5b59-7c29-414b-b340-2ca8f7f0e43c name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.827742475Z" level=info msg="Starting container: 13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda" id=499602d9-0d4f-4507-ab37-6354af351918 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.860958495Z" level=info msg="Created container 99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171: kube-system/kube-apiserver-pause-864402/kube-apiserver" id=99470ce5-4ff1-4b25-8f5c-b9b1d3228356 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.861597440Z" level=info msg="Starting container: 99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171" id=dbcc5edb-e4c4-46f4-bbe0-9cc536f62fae name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.881209705Z" level=info msg="Started container" PID=2835 containerID=682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6 description=kube-system/coredns-7db6d8ff4d-cw9s5/coredns id=601ea9f1-093f-431c-807f-c1986dd8a41c name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c597616dae659012d895bcfd524f7efb69bd753e335d0448311cf212f6eb3dd
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.888384409Z" level=info msg="Started container" PID=2889 containerID=99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171 description=kube-system/kube-apiserver-pause-864402/kube-apiserver id=dbcc5edb-e4c4-46f4-bbe0-9cc536f62fae name=/runtime.v1.RuntimeService/StartContainer sandboxID=81dad9efc6edc7e0ea5e57ae152c4661614b90f808894dcb2f81fada4f35228a
	Jul 23 15:13:57 pause-864402 crio[2610]: time="2024-07-23 15:13:57.897552988Z" level=info msg="Started container" PID=2880 containerID=13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda description=kube-system/kindnet-tzchc/kindnet-cni id=499602d9-0d4f-4507-ab37-6354af351918 name=/runtime.v1.RuntimeService/StartContainer sandboxID=33a658016d0639adc1c491fc1eddfb7d3775590ecad4db5f60636ebde63724d5
	Jul 23 15:13:58 pause-864402 crio[2610]: time="2024-07-23 15:13:58.078675685Z" level=info msg="Created container 8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99: kube-system/etcd-pause-864402/etcd" id=fad10261-1cb5-4a02-ba44-49b1477f4a50 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 23 15:13:58 pause-864402 crio[2610]: time="2024-07-23 15:13:58.079317461Z" level=info msg="Starting container: 8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99" id=4425f09a-69ee-42f5-a5aa-6f1d52882b32 name=/runtime.v1.RuntimeService/StartContainer
	Jul 23 15:13:58 pause-864402 crio[2610]: time="2024-07-23 15:13:58.097674605Z" level=info msg="Started container" PID=2928 containerID=8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99 description=kube-system/etcd-pause-864402/etcd id=4425f09a-69ee-42f5-a5aa-6f1d52882b32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa8d32346823da69d2749bba359150c88507f142a04127ea28d61e5db92d6d94
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.629372939Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.668735962Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.668767248Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.668783133Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.672711936Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.672747177Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.672762258Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.676426099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.676456048Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.676471507Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.688009872Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 23 15:14:08 pause-864402 crio[2610]: time="2024-07-23 15:14:08.688047017Z" level=info msg="Updated default CNI network name to kindnet"
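
The CREATE/WRITE/RENAME sequence on 10-kindnet.conflist.temp is the standard atomic-update pattern for config files: write the complete payload to a temp file in the same directory, then rename it over the final name so a watcher such as CRI-O never reads a half-written conflist. A small Go sketch of the pattern — illustrative, not kindnet's actual code:

package main

import (
	"os"
	"path/filepath"
)

func writeConflistAtomically(dir, name string, data []byte) error {
	tmp := filepath.Join(dir, name+".temp")
	// Write the whole payload to the temp file first (the CREATE and WRITE events).
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	// rename(2) is atomic within a filesystem (the RENAME event): readers see
	// either the old conflist or the new one, never a truncated file.
	return os.Rename(tmp, filepath.Join(dir, name))
}

func main() {
	conflist := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[]}`)
	if err := writeConflistAtomically("/etc/cni/net.d", "10-kindnet.conflist", conflist); err != nil {
		panic(err)
	}
}
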
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5bb451f2e0e16       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                     14 seconds ago       Running             kube-scheduler            1                   f0f78f96174f7       kube-scheduler-pause-864402
	394ad9f805374       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                     14 seconds ago       Running             coredns                   1                   b823c1ea53fb3       coredns-7db6d8ff4d-8rdhr
	99c9ceed64ad3       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                     14 seconds ago       Running             kube-apiserver            1                   81dad9efc6edc       kube-apiserver-pause-864402
	8d96fb8d9b0ca       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                     14 seconds ago       Running             etcd                      1                   aa8d32346823d       etcd-pause-864402
	13ab1eaf0c3e0       f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800                                     14 seconds ago       Running             kindnet-cni               1                   33a658016d063       kindnet-tzchc
	682d8a34beb32       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                     14 seconds ago       Running             coredns                   1                   8c597616dae65       coredns-7db6d8ff4d-cw9s5
	e092a00049200       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                     14 seconds ago       Running             kube-proxy                1                   5953e88cccb50       kube-proxy-vjl8m
	f69c46d26d552       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                     14 seconds ago       Running             kube-controller-manager   1                   82d4596098e97       kube-controller-manager-pause-864402
	2ab671d3c2c48       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                     27 seconds ago       Exited              coredns                   0                   b823c1ea53fb3       coredns-7db6d8ff4d-8rdhr
	42e151d36a512       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                     27 seconds ago       Exited              coredns                   0                   8c597616dae65       coredns-7db6d8ff4d-cw9s5
	56a202d711bd0       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a   40 seconds ago       Exited              kindnet-cni               0                   33a658016d063       kindnet-tzchc
	0de9dcbc9d41b       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                     41 seconds ago       Exited              kube-proxy                0                   5953e88cccb50       kube-proxy-vjl8m
	b203059c3c143       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                     About a minute ago   Exited              etcd                      0                   aa8d32346823d       etcd-pause-864402
	ea06788641402       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                     About a minute ago   Exited              kube-controller-manager   0                   82d4596098e97       kube-controller-manager-pause-864402
	de67f38f84571       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                     About a minute ago   Exited              kube-apiserver            0                   81dad9efc6edc       kube-apiserver-pause-864402
	64765c244fad2       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                     About a minute ago   Exited              kube-scheduler            0                   f0f78f96174f7       kube-scheduler-pause-864402
	
	
	==> coredns [2ab671d3c2c489377ad82bf8c5a3897ffe297d7fa42d663d3b6a81f5edae2f0c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52568 - 11110 "HINFO IN 7062363414147552866.5651338380928459855. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017081376s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [394ad9f8053742d9dc87a97077c6f8e44517d4ee9df66d07cff51474e383965d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40709 - 7715 "HINFO IN 7580601572474393587.5889626321627148573. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014420392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [42e151d36a512ec37fa96e308c539b09fec09053a664e3849680f927f6452a2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37900 - 22275 "HINFO IN 5039388286640406195.5983277421212417471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015056273s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [682d8a34beb32bda7d6f2593fd0f524b5b14574b91f62b4086d24ed7891e3ea6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35588 - 901 "HINFO IN 7215170534647923444.6385224591540909521. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042996248s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-864402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-864402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
	                    minikube.k8s.io/name=pause-864402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_23T15_13_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 23 Jul 2024 15:13:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-864402
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 23 Jul 2024 15:14:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 23 Jul 2024 15:13:47 +0000   Tue, 23 Jul 2024 15:13:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 23 Jul 2024 15:13:47 +0000   Tue, 23 Jul 2024 15:13:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 23 Jul 2024 15:13:47 +0000   Tue, 23 Jul 2024 15:13:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 23 Jul 2024 15:13:47 +0000   Tue, 23 Jul 2024 15:13:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-864402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 995abe7ee3c6458997f438010e42ea21
	  System UUID:                a9197847-48c0-415a-ba8a-c9f0f4e811c3
	  Boot ID:                    95e04985-bf92-47a1-9b5b-7f09371b9e30
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8rdhr                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     43s
	  kube-system                 coredns-7db6d8ff4d-cw9s5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     43s
	  kube-system                 etcd-pause-864402                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         56s
	  kube-system                 kindnet-tzchc                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-apiserver-pause-864402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-pause-864402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-vjl8m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-pause-864402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  Starting                 7s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node pause-864402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node pause-864402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node pause-864402 status is now: NodeHasSufficientPID
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s                kubelet          Node pause-864402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s                kubelet          Node pause-864402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s                kubelet          Node pause-864402 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node pause-864402 event: Registered Node pause-864402 in Controller
	  Normal  NodeReady                30s                kubelet          Node pause-864402 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001118] FS-Cache: O-key=[8] '1a733b0000000000'
	[  +0.000744] FS-Cache: N-cookie c=000000e4 [p=000000db fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000df96581b
	[  +0.001091] FS-Cache: N-key=[8] '1a733b0000000000'
	[  +0.003238] FS-Cache: Duplicate cookie detected
	[  +0.000706] FS-Cache: O-cookie c=000000de [p=000000db fl=226 nc=0 na=1]
	[  +0.001048] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=00000000a77d32c1
	[  +0.001108] FS-Cache: O-key=[8] '1a733b0000000000'
	[  +0.000748] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
	[  +0.000997] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=000000003ba586a5
	[  +0.001137] FS-Cache: N-key=[8] '1a733b0000000000'
	[  +2.731848] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=00000000ee5383df
	[  +0.001099] FS-Cache: O-key=[8] '19733b0000000000'
	[  +0.000771] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000052ff0a6
	[  +0.001114] FS-Cache: N-key=[8] '19733b0000000000'
	[  +0.302039] FS-Cache: Duplicate cookie detected
	[  +0.000741] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000a817b499{9p.inode} n=000000001645a21b
	[  +0.001107] FS-Cache: O-key=[8] '1f733b0000000000'
	[  +0.000755] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000a817b499{9p.inode} n=00000000df96581b
	[  +0.001106] FS-Cache: N-key=[8] '1f733b0000000000'
	
	
	==> etcd [8d96fb8d9b0ca0dd8ff886b3a4e820fed4fa1288ac3598a06d3c7f8afe619e99] <==
	{"level":"info","ts":"2024-07-23T15:13:58.627408Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T15:13:58.651122Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-23T15:13:58.652018Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-23T15:13:58.652271Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-23T15:13:58.652354Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-23T15:13:58.652507Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-07-23T15:13:58.652542Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-07-23T15:13:58.661449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-07-23T15:13:58.661575Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-07-23T15:13:58.66169Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:58.661759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:14:00.366607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:00.36674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:00.366794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-07-23T15:14:00.366836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:00.366875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:00.366914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:00.36695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-07-23T15:14:00.373545Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-864402 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:14:00.373838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:14:00.374191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:14:00.374412Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:14:00.374471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:14:00.376091Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-07-23T15:14:00.392114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b203059c3c1438dfd31a735bfcf31379d96d975eec700982ccca07d52f6dc740] <==
	{"level":"info","ts":"2024-07-23T15:13:06.218839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-07-23T15:13:06.218875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-07-23T15:13:06.21893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-07-23T15:13:06.218966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-07-23T15:13:06.226725Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-864402 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-23T15:13:06.226838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:13:06.227133Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:06.24823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-07-23T15:13:06.248639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:06.26888Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:06.269021Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-23T15:13:06.253317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-23T15:13:06.25336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-23T15:13:06.269287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-23T15:13:06.277292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-23T15:13:48.543118Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-23T15:13:48.545648Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-864402","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-07-23T15:13:48.545798Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T15:13:48.550791Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T15:13:48.622045Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-23T15:13:48.622106Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-23T15:13:48.62218Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-07-23T15:13:48.623871Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-07-23T15:13:48.623988Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-07-23T15:13:48.624045Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-864402","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 15:14:12 up 23:56,  0 users,  load average: 4.78, 2.54, 2.02
	Linux pause-864402 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [13ab1eaf0c3e0468ddfefff14c5090a6b45a71823d387f7c663baa763b43dbda] <==
	I0723 15:13:58.628890       1 controller.go:334] Starting controller kube-network-policies
	I0723 15:13:58.650599       1 controller.go:338] Waiting for informer caches to sync
	I0723 15:13:58.650686       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	W0723 15:14:04.900189       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:14:04.900310       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:14:04.900393       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:14:04.900438       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:14:04.900491       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:14:04.900541       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:14:05.748988       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:14:05.749108       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:14:05.816616       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:14:05.816747       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:14:06.036463       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:14:06.036506       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:14:07.606290       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:14:07.606337       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:14:08.019721       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:14:08.019763       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:14:08.336069       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:14:08.336107       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0723 15:14:08.628888       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0723 15:14:08.628937       1 main.go:299] handling current node
	W0723 15:14:12.242253       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:14:12.242349       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kindnet [56a202d711bd0cb7058cc944a7bc7f5e3da05738e3963343fd9ed69ce506f333] <==
	E0723 15:13:32.034635       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:32.977310       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:13:32.977346       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:33.576268       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:13:33.576409       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:13:33.629966       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:13:33.630102       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:13:35.583691       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:13:35.583811       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:36.336731       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:13:36.336765       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:13:36.682181       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:13:36.682314       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0723 15:13:38.793142       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:13:38.793485       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:39.652891       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:13:39.653012       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:13:41.176091       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:13:41.176126       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0723 15:13:42.021645       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0723 15:13:42.021799       1 main.go:299] handling current node
	W0723 15:13:46.401839       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0723 15:13:46.402085       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0723 15:13:47.933579       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0723 15:13:47.933633       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kube-apiserver [99c9ceed64ad3d10f975b558f86293c29bbf182265232485f460e6b789992171] <==
	I0723 15:14:04.554774       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0723 15:14:04.554814       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0723 15:14:04.555154       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0723 15:14:04.555285       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0723 15:14:04.535962       1 aggregator.go:163] waiting for initial CRD sync...
	I0723 15:14:04.710006       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0723 15:14:04.747482       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0723 15:14:04.824373       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0723 15:14:04.824459       1 policy_source.go:224] refreshing policies
	I0723 15:14:04.847848       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0723 15:14:04.852292       1 aggregator.go:165] initial CRD sync complete...
	I0723 15:14:04.852444       1 autoregister_controller.go:141] Starting autoregister controller
	I0723 15:14:04.852516       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0723 15:14:04.852550       1 cache.go:39] Caches are synced for autoregister controller
	I0723 15:14:04.882547       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0723 15:14:04.912736       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0723 15:14:04.997199       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0723 15:14:04.997419       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0723 15:14:05.010876       1 shared_informer.go:320] Caches are synced for configmaps
	I0723 15:14:05.011048       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0723 15:14:05.011105       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0723 15:14:05.012231       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0723 15:14:05.029381       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0723 15:14:05.050445       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0723 15:14:05.553659       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-apiserver [de67f38f84571a6ff8218d47a832510c1ddcb92ac9d51bfff7edb8e4d82ca409] <==
	I0723 15:13:16.108844       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0723 15:13:28.896727       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0723 15:13:29.239845       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0723 15:13:48.542259       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0723 15:13:48.556590       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556655       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556705       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556748       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556785       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556831       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556871       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556909       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556942       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.556974       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.557470       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.557520       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.557570       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564470       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564566       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564625       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564677       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564850       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564910       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.564977       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0723 15:13:48.566030       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [ea06788641402edebc224a5a98dec557723cdf365b7fcd0c05b20e74207c02ed] <==
	I0723 15:13:28.439740       1 shared_informer.go:320] Caches are synced for expand
	I0723 15:13:28.449300       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0723 15:13:28.466888       1 shared_informer.go:320] Caches are synced for PVC protection
	I0723 15:13:28.475033       1 shared_informer.go:320] Caches are synced for stateful set
	I0723 15:13:28.490578       1 shared_informer.go:320] Caches are synced for persistent volume
	I0723 15:13:28.496895       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 15:13:28.517424       1 shared_informer.go:320] Caches are synced for ephemeral
	I0723 15:13:28.520897       1 shared_informer.go:320] Caches are synced for attach detach
	I0723 15:13:28.545594       1 shared_informer.go:320] Caches are synced for resource quota
	I0723 15:13:29.004366       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 15:13:29.004456       1 shared_informer.go:320] Caches are synced for garbage collector
	I0723 15:13:29.004468       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0723 15:13:29.465782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="221.207829ms"
	I0723 15:13:29.479679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.843366ms"
	I0723 15:13:29.479759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.537µs"
	I0723 15:13:42.161301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.724µs"
	I0723 15:13:42.174250       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.069µs"
	I0723 15:13:42.190419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.646µs"
	I0723 15:13:42.209146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.969µs"
	I0723 15:13:43.370747       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0723 15:13:45.163856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="176.034µs"
	I0723 15:13:45.257032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.724402ms"
	I0723 15:13:45.259453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="123.849µs"
	I0723 15:13:45.309986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.398457ms"
	I0723 15:13:45.310236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.688µs"
	
	
	==> kube-controller-manager [f69c46d26d552fcd64207b6dd16a65aa163aec359db25d9f540d40ebcf724a21] <==
	I0723 15:14:06.899444       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0723 15:14:06.899500       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0723 15:14:06.905938       1 shared_informer.go:320] Caches are synced for tokens
	I0723 15:14:06.910675       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0723 15:14:06.911166       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0723 15:14:06.912753       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0723 15:14:06.916012       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0723 15:14:06.916556       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0723 15:14:06.916610       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0723 15:14:06.920830       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0723 15:14:06.921136       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0723 15:14:06.921287       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0723 15:14:06.925478       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0723 15:14:06.926127       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0723 15:14:06.926195       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0723 15:14:06.929714       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0723 15:14:06.929842       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0723 15:14:06.929884       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0723 15:14:06.929817       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0723 15:14:06.930631       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0723 15:14:06.936831       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0723 15:14:06.937006       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0723 15:14:06.937211       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0723 15:14:06.940525       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0723 15:14:06.941291       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	
	
	==> kube-proxy [0de9dcbc9d41bf6080842f82238b88f2e50689901abb8b333766b2dfa0e63a55] <==
	I0723 15:13:29.986786       1 server_linux.go:69] "Using iptables proxy"
	I0723 15:13:30.002324       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	I0723 15:13:30.044342       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0723 15:13:30.044485       1 server_linux.go:165] "Using iptables Proxier"
	I0723 15:13:30.047603       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0723 15:13:30.047635       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0723 15:13:30.047666       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 15:13:30.047919       1 server.go:872] "Version info" version="v1.30.3"
	I0723 15:13:30.047945       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:13:30.049890       1 config.go:192] "Starting service config controller"
	I0723 15:13:30.050137       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:13:30.050472       1 config.go:319] "Starting node config controller"
	I0723 15:13:30.052198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:13:30.052428       1 config.go:101] "Starting endpoint slice config controller"
	I0723 15:13:30.053039       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:13:30.150910       1 shared_informer.go:320] Caches are synced for service config
	I0723 15:13:30.154084       1 shared_informer.go:320] Caches are synced for node config
	I0723 15:13:30.154111       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e092a00049200538fdd8d04a30c9fe7e039a5edac85687675878c085dc5d14a0] <==
	I0723 15:14:03.656702       1 server_linux.go:69] "Using iptables proxy"
	I0723 15:14:04.925344       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	I0723 15:14:05.078062       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0723 15:14:05.078120       1 server_linux.go:165] "Using iptables Proxier"
	I0723 15:14:05.079690       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0723 15:14:05.079715       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0723 15:14:05.079743       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0723 15:14:05.079954       1 server.go:872] "Version info" version="v1.30.3"
	I0723 15:14:05.079965       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:14:05.081108       1 config.go:192] "Starting service config controller"
	I0723 15:14:05.081132       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0723 15:14:05.081166       1 config.go:101] "Starting endpoint slice config controller"
	I0723 15:14:05.081176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0723 15:14:05.082134       1 config.go:319] "Starting node config controller"
	I0723 15:14:05.082154       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0723 15:14:05.182166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0723 15:14:05.182240       1 shared_informer.go:320] Caches are synced for node config
	I0723 15:14:05.182169       1 shared_informer.go:320] Caches are synced for service config
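Editor's note: both kube-proxy instances report setting route_localnet=1, which is what allows NodePort services to answer on 127.0.0.1 inside the node. A quick way to verify the sysctl on the test node, as a sketch (profile name taken from this run):

	minikube -p pause-864402 ssh "sysctl net.ipv4.conf.all.route_localnet"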
	
	
	==> kube-scheduler [5bb451f2e0e162d68efcb7265c7e11da69737651fdf37192d43ee0611a4c436b] <==
	I0723 15:14:02.304098       1 serving.go:380] Generated self-signed cert in-memory
	W0723 15:14:04.782882       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0723 15:14:04.782978       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0723 15:14:04.783013       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0723 15:14:04.783044       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0723 15:14:04.901473       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0723 15:14:04.902736       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0723 15:14:04.907250       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0723 15:14:04.910104       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0723 15:14:04.922958       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:14:04.910128       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0723 15:14:05.023553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
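Editor's note: the extension-apiserver-authentication warnings above come from the restarted scheduler racing the API server for that ConfigMap; they clear on their own once RBAC is served again, as the final "Caches are synced" line shows. For cases where the lookup keeps failing, the remedy the log itself suggests looks like this with the placeholders filled in (the binding name here is hypothetical):

	kubectl -n kube-system create rolebinding scheduler-authn-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler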
	
	
	==> kube-scheduler [64765c244fad2609e102831ecd2f0bc2856d178e82298ec0f01ed7d1ea631562] <==
	W0723 15:13:13.976208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0723 15:13:13.976260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0723 15:13:14.011473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0723 15:13:14.011604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0723 15:13:14.021010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0723 15:13:14.021130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0723 15:13:14.133670       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0723 15:13:14.133781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0723 15:13:14.285130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0723 15:13:14.285260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0723 15:13:14.321619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0723 15:13:14.321751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0723 15:13:14.333812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0723 15:13:14.333937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0723 15:13:14.383375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0723 15:13:14.383507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0723 15:13:14.491159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0723 15:13:14.491345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0723 15:13:14.491231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0723 15:13:14.491442       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0723 15:13:14.728481       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0723 15:13:14.728635       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0723 15:13:17.547800       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0723 15:13:48.539584       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0723 15:13:48.539693       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.285582    1590 status_manager.go:853] "Failed to get status for pod" podUID="e20d03de-bb10-43db-ae8b-154cad292ccd" pod="kube-system/kindnet-tzchc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-tzchc\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.285847    1590 status_manager.go:853] "Failed to get status for pod" podUID="814d4c9e-fdfa-45cf-a6d0-2bdfd7e172f4" pod="kube-system/coredns-7db6d8ff4d-8rdhr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8rdhr\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286064    1590 status_manager.go:853] "Failed to get status for pod" podUID="a7196210-77e4-4a04-ace3-2a2f4ffca408" pod="kube-system/coredns-7db6d8ff4d-cw9s5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cw9s5\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286256    1590 status_manager.go:853] "Failed to get status for pod" podUID="92b5fbd752cb11389a0e6c5cfdad3f14" pod="kube-system/kube-scheduler-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286454    1590 status_manager.go:853] "Failed to get status for pod" podUID="9e4724bd4603eae1502167ee3056854a" pod="kube-system/etcd-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286657    1590 status_manager.go:853] "Failed to get status for pod" podUID="cb8d092b9aeb2ec0ae14ddf2e642ed10" pod="kube-system/kube-controller-manager-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.286880    1590 status_manager.go:853] "Failed to get status for pod" podUID="24283b91b44a43cf7fec0a766c7718cd" pod="kube-system/kube-apiserver-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.288409    1590 status_manager.go:853] "Failed to get status for pod" podUID="814d4c9e-fdfa-45cf-a6d0-2bdfd7e172f4" pod="kube-system/coredns-7db6d8ff4d-8rdhr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8rdhr\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.288643    1590 status_manager.go:853] "Failed to get status for pod" podUID="a7196210-77e4-4a04-ace3-2a2f4ffca408" pod="kube-system/coredns-7db6d8ff4d-cw9s5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cw9s5\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.288879    1590 status_manager.go:853] "Failed to get status for pod" podUID="92b5fbd752cb11389a0e6c5cfdad3f14" pod="kube-system/kube-scheduler-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.289249    1590 status_manager.go:853] "Failed to get status for pod" podUID="9e4724bd4603eae1502167ee3056854a" pod="kube-system/etcd-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.289567    1590 status_manager.go:853] "Failed to get status for pod" podUID="cb8d092b9aeb2ec0ae14ddf2e642ed10" pod="kube-system/kube-controller-manager-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.289765    1590 status_manager.go:853] "Failed to get status for pod" podUID="24283b91b44a43cf7fec0a766c7718cd" pod="kube-system/kube-apiserver-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.289950    1590 status_manager.go:853] "Failed to get status for pod" podUID="d15badef-bb4d-428a-9402-5dd73f507db1" pod="kube-system/kube-proxy-vjl8m" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjl8m\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.290129    1590 status_manager.go:853] "Failed to get status for pod" podUID="e20d03de-bb10-43db-ae8b-154cad292ccd" pod="kube-system/kindnet-tzchc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-tzchc\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.292965    1590 status_manager.go:853] "Failed to get status for pod" podUID="814d4c9e-fdfa-45cf-a6d0-2bdfd7e172f4" pod="kube-system/coredns-7db6d8ff4d-8rdhr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8rdhr\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.293197    1590 status_manager.go:853] "Failed to get status for pod" podUID="a7196210-77e4-4a04-ace3-2a2f4ffca408" pod="kube-system/coredns-7db6d8ff4d-cw9s5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cw9s5\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.293493    1590 status_manager.go:853] "Failed to get status for pod" podUID="92b5fbd752cb11389a0e6c5cfdad3f14" pod="kube-system/kube-scheduler-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.293781    1590 status_manager.go:853] "Failed to get status for pod" podUID="9e4724bd4603eae1502167ee3056854a" pod="kube-system/etcd-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.293995    1590 status_manager.go:853] "Failed to get status for pod" podUID="cb8d092b9aeb2ec0ae14ddf2e642ed10" pod="kube-system/kube-controller-manager-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.294183    1590 status_manager.go:853] "Failed to get status for pod" podUID="24283b91b44a43cf7fec0a766c7718cd" pod="kube-system/kube-apiserver-pause-864402" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-864402\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.294454    1590 status_manager.go:853] "Failed to get status for pod" podUID="d15badef-bb4d-428a-9402-5dd73f507db1" pod="kube-system/kube-proxy-vjl8m" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjl8m\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:13:58 pause-864402 kubelet[1590]: I0723 15:13:58.294914    1590 status_manager.go:853] "Failed to get status for pod" podUID="e20d03de-bb10-43db-ae8b-154cad292ccd" pod="kube-system/kindnet-tzchc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-tzchc\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 23 15:14:06 pause-864402 kubelet[1590]: W0723 15:14:06.159408    1590 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Jul 23 15:14:06 pause-864402 kubelet[1590]: W0723 15:14:06.160566    1590 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	

-- /stdout --
** stderr ** 
	E0723 15:14:11.057629 3475196 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19319-3317687/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
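Editor's note: the "bufio.Scanner: token too long" error is a reporting artifact, not a cluster failure. A single line in lastStart.txt exceeded Go's default 64 KiB scanner buffer (bufio.MaxScanTokenSize), so the log helper could not echo the last-start log. A diagnostic sketch to locate the offending line (path as logged above):

	awk 'length($0) > 65536 { print NR ": " length($0) " bytes" }' \
	  /home/jenkins/minikube-integration/19319-3317687/.minikube/logs/lastStart.txt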
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-864402 -n pause-864402
helpers_test.go:261: (dbg) Run:  kubectl --context pause-864402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (26.84s)

Test pass (293/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.89
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.30.3/json-events 9.14
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.19
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 7.62
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.53
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 221.34
38 TestAddons/parallel/Registry 15.48
40 TestAddons/parallel/InspektorGadget 11.77
44 TestAddons/parallel/CSI 33.03
45 TestAddons/parallel/Headlamp 10.96
46 TestAddons/parallel/CloudSpanner 6.58
47 TestAddons/parallel/LocalPath 66.31
48 TestAddons/parallel/NvidiaDevicePlugin 6.53
49 TestAddons/parallel/Yakd 5
53 TestAddons/serial/GCPAuth/Namespaces 0.17
54 TestAddons/StoppedEnableDisable 12.16
55 TestCertOptions 40.9
56 TestCertExpiration 247.45
58 TestForceSystemdFlag 41.63
59 TestForceSystemdEnv 39.38
65 TestErrorSpam/setup 30.44
66 TestErrorSpam/start 0.72
67 TestErrorSpam/status 0.96
68 TestErrorSpam/pause 1.63
69 TestErrorSpam/unpause 1.74
70 TestErrorSpam/stop 1.42
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 57.21
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 29.02
77 TestFunctional/serial/KubeContext 0.08
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.16
82 TestFunctional/serial/CacheCmd/cache/add_local 1.12
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
84 TestFunctional/serial/CacheCmd/cache/list 0.06
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
87 TestFunctional/serial/CacheCmd/cache/delete 0.11
88 TestFunctional/serial/MinikubeKubectlCmd 0.13
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
90 TestFunctional/serial/ExtraConfig 36.1
91 TestFunctional/serial/ComponentHealth 0.1
92 TestFunctional/serial/LogsCmd 1.73
93 TestFunctional/serial/LogsFileCmd 1.81
94 TestFunctional/serial/InvalidService 4.25
96 TestFunctional/parallel/ConfigCmd 0.44
97 TestFunctional/parallel/DashboardCmd 8.19
98 TestFunctional/parallel/DryRun 0.41
99 TestFunctional/parallel/InternationalLanguage 0.19
100 TestFunctional/parallel/StatusCmd 0.96
104 TestFunctional/parallel/ServiceCmdConnect 11.55
105 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/SSHCmd 0.67
109 TestFunctional/parallel/CpCmd 2.29
111 TestFunctional/parallel/FileSync 0.25
112 TestFunctional/parallel/CertSync 1.58
116 TestFunctional/parallel/NodeLabels 0.08
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
120 TestFunctional/parallel/License 0.28
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.49
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.2
133 TestFunctional/parallel/ServiceCmd/List 0.49
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
136 TestFunctional/parallel/ServiceCmd/Format 0.36
137 TestFunctional/parallel/ServiceCmd/URL 0.37
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
139 TestFunctional/parallel/ProfileCmd/profile_list 0.38
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
141 TestFunctional/parallel/MountCmd/any-port 21.61
142 TestFunctional/parallel/MountCmd/specific-port 1.85
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
144 TestFunctional/parallel/Version/short 0.05
145 TestFunctional/parallel/Version/components 0.98
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
150 TestFunctional/parallel/ImageCommands/ImageBuild 2.5
151 TestFunctional/parallel/ImageCommands/Setup 0.73
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
159 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
160 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
161 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 188.94
169 TestMultiControlPlane/serial/DeployApp 7.87
170 TestMultiControlPlane/serial/PingHostFromPods 1.59
171 TestMultiControlPlane/serial/AddWorkerNode 35.39
172 TestMultiControlPlane/serial/NodeLabels 0.1
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.72
174 TestMultiControlPlane/serial/CopyFile 18.31
175 TestMultiControlPlane/serial/StopSecondaryNode 12.69
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
177 TestMultiControlPlane/serial/RestartSecondaryNode 32.97
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 172.49
180 TestMultiControlPlane/serial/DeleteSecondaryNode 13.01
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.49
182 TestMultiControlPlane/serial/StopCluster 35.75
183 TestMultiControlPlane/serial/RestartCluster 120.55
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
185 TestMultiControlPlane/serial/AddSecondaryNode 74.2
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.73
190 TestJSONOutput/start/Command 60.05
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.74
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.65
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.91
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.23
215 TestKicCustomNetwork/create_custom_network 37.1
216 TestKicCustomNetwork/use_default_bridge_network 33.31
217 TestKicExistingNetwork 33.3
218 TestKicCustomSubnet 37.49
219 TestKicStaticIP 33.13
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 68.3
224 TestMountStart/serial/StartWithMountFirst 6.53
225 TestMountStart/serial/VerifyMountFirst 0.26
226 TestMountStart/serial/StartWithMountSecond 6.45
227 TestMountStart/serial/VerifyMountSecond 0.26
228 TestMountStart/serial/DeleteFirst 1.59
229 TestMountStart/serial/VerifyMountPostDelete 0.25
230 TestMountStart/serial/Stop 1.2
231 TestMountStart/serial/RestartStopped 8.29
232 TestMountStart/serial/VerifyMountPostStop 0.26
235 TestMultiNode/serial/FreshStart2Nodes 85.22
236 TestMultiNode/serial/DeployApp2Nodes 4.79
237 TestMultiNode/serial/PingHostFrom2Pods 1.16
238 TestMultiNode/serial/AddNode 28.58
239 TestMultiNode/serial/MultiNodeLabels 0.09
240 TestMultiNode/serial/ProfileList 0.33
241 TestMultiNode/serial/CopyFile 9.73
242 TestMultiNode/serial/StopNode 2.23
243 TestMultiNode/serial/StartAfterStop 9.88
244 TestMultiNode/serial/RestartKeepsNodes 88.11
245 TestMultiNode/serial/DeleteNode 5.22
246 TestMultiNode/serial/StopMultiNode 23.85
247 TestMultiNode/serial/RestartMultiNode 57.37
248 TestMultiNode/serial/ValidateNameConflict 35.31
253 TestPreload 130.81
255 TestScheduledStopUnix 106.55
258 TestInsufficientStorage 10.96
259 TestRunningBinaryUpgrade 78.79
261 TestKubernetesUpgrade 385.25
262 TestMissingContainerUpgrade 146.11
264 TestPause/serial/Start 73.48
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
267 TestNoKubernetes/serial/StartWithK8s 46.66
268 TestNoKubernetes/serial/StartWithStopK8s 16.77
269 TestNoKubernetes/serial/Start 6.11
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
271 TestNoKubernetes/serial/ProfileList 1.03
272 TestNoKubernetes/serial/Stop 1.24
273 TestNoKubernetes/serial/StartNoArgs 6.75
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
276 TestStoppedBinaryUpgrade/Setup 1.21
277 TestStoppedBinaryUpgrade/Upgrade 101.8
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
293 TestNetworkPlugins/group/false 4.06
298 TestStartStop/group/old-k8s-version/serial/FirstStart 177.85
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.63
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.46
302 TestStartStop/group/no-preload/serial/FirstStart 68.4
303 TestStartStop/group/old-k8s-version/serial/Stop 13.26
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
305 TestStartStop/group/old-k8s-version/serial/SecondStart 153.65
306 TestStartStop/group/no-preload/serial/DeployApp 9.41
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
308 TestStartStop/group/no-preload/serial/Stop 11.97
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
310 TestStartStop/group/no-preload/serial/SecondStart 280.24
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
314 TestStartStop/group/old-k8s-version/serial/Pause 2.92
316 TestStartStop/group/embed-certs/serial/FirstStart 59.21
317 TestStartStop/group/embed-certs/serial/DeployApp 8.35
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
319 TestStartStop/group/embed-certs/serial/Stop 11.97
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
321 TestStartStop/group/embed-certs/serial/SecondStart 266.25
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
325 TestStartStop/group/no-preload/serial/Pause 3.09
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.28
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.12
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
336 TestStartStop/group/embed-certs/serial/Pause 2.99
338 TestStartStop/group/newest-cni/serial/FirstStart 35.3
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
341 TestStartStop/group/newest-cni/serial/Stop 1.28
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
343 TestStartStop/group/newest-cni/serial/SecondStart 15.07
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
347 TestStartStop/group/newest-cni/serial/Pause 3.36
348 TestNetworkPlugins/group/auto/Start 61.07
349 TestNetworkPlugins/group/auto/KubeletFlags 0.33
350 TestNetworkPlugins/group/auto/NetCatPod 12.3
351 TestNetworkPlugins/group/auto/DNS 0.22
352 TestNetworkPlugins/group/auto/Localhost 0.17
353 TestNetworkPlugins/group/auto/HairPin 0.16
354 TestNetworkPlugins/group/kindnet/Start 60.54
355 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.11
359 TestNetworkPlugins/group/calico/Start 75.13
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
362 TestNetworkPlugins/group/kindnet/NetCatPod 14.33
363 TestNetworkPlugins/group/kindnet/DNS 0.23
364 TestNetworkPlugins/group/kindnet/Localhost 0.21
365 TestNetworkPlugins/group/kindnet/HairPin 0.19
366 TestNetworkPlugins/group/custom-flannel/Start 72.84
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.37
369 TestNetworkPlugins/group/calico/NetCatPod 11.37
370 TestNetworkPlugins/group/calico/DNS 0.27
371 TestNetworkPlugins/group/calico/Localhost 0.28
372 TestNetworkPlugins/group/calico/HairPin 0.24
373 TestNetworkPlugins/group/enable-default-cni/Start 94.03
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.68
376 TestNetworkPlugins/group/custom-flannel/DNS 0.23
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.43
379 TestNetworkPlugins/group/flannel/Start 65.77
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.45
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
387 TestNetworkPlugins/group/flannel/NetCatPod 12.35
388 TestNetworkPlugins/group/bridge/Start 51.41
389 TestNetworkPlugins/group/flannel/DNS 0.22
390 TestNetworkPlugins/group/flannel/Localhost 0.18
391 TestNetworkPlugins/group/flannel/HairPin 0.22
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
393 TestNetworkPlugins/group/bridge/NetCatPod 10.28
394 TestNetworkPlugins/group/bridge/DNS 0.19
395 TestNetworkPlugins/group/bridge/Localhost 0.15
396 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (8.89s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-438325 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-438325 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.893162474s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.89s)
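Editor's note: the json-events variants drive minikube with -o=json, which streams CloudEvents-formatted progress records on stdout. A sketch of inspecting that stream by hand (assumes jq is installed; the profile name download-only-demo is hypothetical, and data.name is the step label minikube emits):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'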

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-438325
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-438325: exit status 85 (72.4677ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-438325 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |          |
	|         | -p download-only-438325        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:27:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:27:00.294002 3323085 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:27:00.294292 3323085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:00.294341 3323085 out.go:304] Setting ErrFile to fd 2...
	I0723 14:27:00.294372 3323085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:00.294712 3323085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	W0723 14:27:00.294927 3323085 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19319-3317687/.minikube/config/config.json: open /home/jenkins/minikube-integration/19319-3317687/.minikube/config/config.json: no such file or directory
	I0723 14:27:00.295586 3323085 out.go:298] Setting JSON to true
	I0723 14:27:00.296677 3323085 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":83367,"bootTime":1721661454,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 14:27:00.296822 3323085 start.go:139] virtualization:  
	I0723 14:27:00.300200 3323085 out.go:97] [download-only-438325] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0723 14:27:00.300439 3323085 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball: no such file or directory
	I0723 14:27:00.300556 3323085 notify.go:220] Checking for updates...
	I0723 14:27:00.303982 3323085 out.go:169] MINIKUBE_LOCATION=19319
	I0723 14:27:00.306023 3323085 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:27:00.308159 3323085 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:27:00.310208 3323085 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 14:27:00.312054 3323085 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0723 14:27:00.316087 3323085 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 14:27:00.316428 3323085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:27:00.339076 3323085 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 14:27:00.339183 3323085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:00.411505 3323085 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-23 14:27:00.401158332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:00.411628 3323085 docker.go:307] overlay module found
	I0723 14:27:00.413719 3323085 out.go:97] Using the docker driver based on user configuration
	I0723 14:27:00.413763 3323085 start.go:297] selected driver: docker
	I0723 14:27:00.413771 3323085 start.go:901] validating driver "docker" against <nil>
	I0723 14:27:00.413901 3323085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:00.476123 3323085 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-23 14:27:00.466961155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:00.476294 3323085 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 14:27:00.476576 3323085 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0723 14:27:00.476737 3323085 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 14:27:00.478849 3323085 out.go:169] Using Docker driver with root privileges
	I0723 14:27:00.480723 3323085 cni.go:84] Creating CNI manager for ""
	I0723 14:27:00.480743 3323085 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:27:00.480753 3323085 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 14:27:00.480867 3323085 start.go:340] cluster config:
	{Name:download-only-438325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-438325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:27:00.483002 3323085 out.go:97] Starting "download-only-438325" primary control-plane node in "download-only-438325" cluster
	I0723 14:27:00.483026 3323085 cache.go:121] Beginning downloading kic base image for docker with crio
	I0723 14:27:00.484747 3323085 out.go:97] Pulling base image v0.0.44-1721687125-19319 ...
	I0723 14:27:00.484788 3323085 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 14:27:00.484879 3323085 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local docker daemon
	I0723 14:27:00.500878 3323085 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
	I0723 14:27:00.501609 3323085 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory
	I0723 14:27:00.501717 3323085 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
	I0723 14:27:00.611966 3323085 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0723 14:27:00.612039 3323085 cache.go:56] Caching tarball of preloaded images
	I0723 14:27:00.612244 3323085 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0723 14:27:00.614557 3323085 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0723 14:27:00.614577 3323085 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0723 14:27:00.708634 3323085 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-438325 host does not exist
	  To start a cluster, run: "minikube start -p download-only-438325"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-438325
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.3/json-events (9.14s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-292108 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-292108 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.139720868s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (9.14s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-292108
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-292108: exit status 85 (70.039206ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-438325 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | -p download-only-438325        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| delete  | -p download-only-438325        | download-only-438325 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| start   | -o=json --download-only        | download-only-292108 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | -p download-only-292108        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:27:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:27:09.588892 3323287 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:27:09.589039 3323287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:09.589063 3323287 out.go:304] Setting ErrFile to fd 2...
	I0723 14:27:09.589075 3323287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:09.589344 3323287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:27:09.589751 3323287 out.go:298] Setting JSON to true
	I0723 14:27:09.590647 3323287 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":83376,"bootTime":1721661454,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 14:27:09.590719 3323287 start.go:139] virtualization:  
	I0723 14:27:09.593778 3323287 out.go:97] [download-only-292108] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0723 14:27:09.593975 3323287 notify.go:220] Checking for updates...
	I0723 14:27:09.596149 3323287 out.go:169] MINIKUBE_LOCATION=19319
	I0723 14:27:09.597990 3323287 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:27:09.599841 3323287 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:27:09.601590 3323287 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 14:27:09.603185 3323287 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0723 14:27:09.606865 3323287 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 14:27:09.607197 3323287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:27:09.628368 3323287 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 14:27:09.628493 3323287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:09.689387 3323287 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-23 14:27:09.67909938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:09.689497 3323287 docker.go:307] overlay module found
	I0723 14:27:09.691362 3323287 out.go:97] Using the docker driver based on user configuration
	I0723 14:27:09.691406 3323287 start.go:297] selected driver: docker
	I0723 14:27:09.691413 3323287 start.go:901] validating driver "docker" against <nil>
	I0723 14:27:09.691519 3323287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:09.742169 3323287 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-23 14:27:09.733455842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:09.742338 3323287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 14:27:09.742622 3323287 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0723 14:27:09.742779 3323287 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 14:27:09.744716 3323287 out.go:169] Using Docker driver with root privileges
	I0723 14:27:09.746339 3323287 cni.go:84] Creating CNI manager for ""
	I0723 14:27:09.746361 3323287 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:27:09.746374 3323287 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 14:27:09.746469 3323287 start.go:340] cluster config:
	{Name:download-only-292108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-292108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:27:09.748139 3323287 out.go:97] Starting "download-only-292108" primary control-plane node in "download-only-292108" cluster
	I0723 14:27:09.748166 3323287 cache.go:121] Beginning downloading kic base image for docker with crio
	I0723 14:27:09.749805 3323287 out.go:97] Pulling base image v0.0.44-1721687125-19319 ...
	I0723 14:27:09.749832 3323287 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:27:09.750007 3323287 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local docker daemon
	I0723 14:27:09.765019 3323287 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
	I0723 14:27:09.765152 3323287 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory
	I0723 14:27:09.765177 3323287 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory, skipping pull
	I0723 14:27:09.765186 3323287 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae exists in cache, skipping pull
	I0723 14:27:09.765194 3323287 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae as a tarball
	I0723 14:27:09.828709 3323287 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0723 14:27:09.828734 3323287 cache.go:56] Caching tarball of preloaded images
	I0723 14:27:09.829455 3323287 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0723 14:27:09.831309 3323287 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0723 14:27:09.831328 3323287 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 ...
	I0723 14:27:09.937071 3323287 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:bace9a3612be7d31e4d3c3d446951ced -> /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-292108 host does not exist
	  To start a cluster, run: "minikube start -p download-only-292108"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

TestDownloadOnly/v1.30.3/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.19s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-292108
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-beta.0/json-events (7.62s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-547065 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-547065 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.623094377s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (7.62s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-547065
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-547065: exit status 85 (71.999312ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-438325 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | -p download-only-438325             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| delete  | -p download-only-438325             | download-only-438325 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| start   | -o=json --download-only             | download-only-292108 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | -p download-only-292108             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| delete  | -p download-only-292108             | download-only-292108 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC | 23 Jul 24 14:27 UTC |
	| start   | -o=json --download-only             | download-only-547065 | jenkins | v1.33.1 | 23 Jul 24 14:27 UTC |                     |
	|         | -p download-only-547065             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/23 14:27:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0723 14:27:19.122689 3323487 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:27:19.122825 3323487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:19.122836 3323487 out.go:304] Setting ErrFile to fd 2...
	I0723 14:27:19.122841 3323487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:27:19.123079 3323487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:27:19.123480 3323487 out.go:298] Setting JSON to true
	I0723 14:27:19.124326 3323487 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":83386,"bootTime":1721661454,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 14:27:19.124391 3323487 start.go:139] virtualization:  
	I0723 14:27:19.126787 3323487 out.go:97] [download-only-547065] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0723 14:27:19.127019 3323487 notify.go:220] Checking for updates...
	I0723 14:27:19.129028 3323487 out.go:169] MINIKUBE_LOCATION=19319
	I0723 14:27:19.131183 3323487 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:27:19.133202 3323487 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:27:19.135008 3323487 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 14:27:19.137306 3323487 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0723 14:27:19.140862 3323487 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0723 14:27:19.141136 3323487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:27:19.161189 3323487 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 14:27:19.161303 3323487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:19.224629 3323487 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-23 14:27:19.21473612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:19.224740 3323487 docker.go:307] overlay module found
	I0723 14:27:19.227019 3323487 out.go:97] Using the docker driver based on user configuration
	I0723 14:27:19.227048 3323487 start.go:297] selected driver: docker
	I0723 14:27:19.227055 3323487 start.go:901] validating driver "docker" against <nil>
	I0723 14:27:19.227170 3323487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:27:19.290273 3323487 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-23 14:27:19.281153596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:27:19.290467 3323487 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0723 14:27:19.290853 3323487 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0723 14:27:19.291020 3323487 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0723 14:27:19.293425 3323487 out.go:169] Using Docker driver with root privileges
	I0723 14:27:19.295427 3323487 cni.go:84] Creating CNI manager for ""
	I0723 14:27:19.295450 3323487 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0723 14:27:19.295462 3323487 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0723 14:27:19.295545 3323487 start.go:340] cluster config:
	{Name:download-only-547065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-547065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:27:19.297376 3323487 out.go:97] Starting "download-only-547065" primary control-plane node in "download-only-547065" cluster
	I0723 14:27:19.297394 3323487 cache.go:121] Beginning downloading kic base image for docker with crio
	I0723 14:27:19.299302 3323487 out.go:97] Pulling base image v0.0.44-1721687125-19319 ...
	I0723 14:27:19.299337 3323487 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 14:27:19.299384 3323487 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local docker daemon
	I0723 14:27:19.313864 3323487 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
	I0723 14:27:19.314013 3323487 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory
	I0723 14:27:19.314033 3323487 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory, skipping pull
	I0723 14:27:19.314038 3323487 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae exists in cache, skipping pull
	I0723 14:27:19.314046 3323487 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae as a tarball
	I0723 14:27:19.350626 3323487 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0723 14:27:19.350671 3323487 cache.go:56] Caching tarball of preloaded images
	I0723 14:27:19.351500 3323487 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0723 14:27:19.353640 3323487 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0723 14:27:19.353670 3323487 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0723 14:27:19.459614 3323487 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:70b5971c257ae4defe1f5d041a04e29c -> /home/jenkins/minikube-integration/19319-3317687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-547065 host does not exist
	  To start a cluster, run: "minikube start -p download-only-547065"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-547065
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.53s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-953180 --alsologtostderr --binary-mirror http://127.0.0.1:39823 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-953180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-953180
--- PASS: TestBinaryMirror (0.53s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-140056
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-140056: exit status 85 (59.661397ms)

-- stdout --
	* Profile "addons-140056" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-140056"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-140056
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-140056: exit status 85 (66.19577ms)

-- stdout --
	* Profile "addons-140056" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-140056"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (221.34s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-140056 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-140056 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m41.339963925s)
--- PASS: TestAddons/Setup (221.34s)
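
For reference, the addon set enabled by that start line can be inspected afterwards; a minimal sketch against the profile above:

	# Show which addons are enabled on the profile
	minikube addons list -p addons-140056
	# Individual addons can also be toggled after start-up
	minikube addons enable metrics-server -p addons-140056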

TestAddons/parallel/Registry (15.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 48.536874ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-pjd4j" [1859702d-c9a6-460d-81c6-102ef98b706b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007270315s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g8j86" [9477b7ff-d5fd-48f9-ad75-25e57440ab34] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00514739s
addons_test.go:342: (dbg) Run:  kubectl --context addons-140056 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-140056 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-140056 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.410530505s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 ip
2024/07/23 14:31:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.48s)
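
The registry probe above is reproducible by hand; a minimal sketch using the same image and in-cluster service DNS name shown in the log:

	# The registry addon exposes a ClusterIP service in kube-system;
	# wget --spider checks reachability without downloading anything.
	kubectl --context addons-140056 run registry-test --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"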

TestAddons/parallel/InspektorGadget (11.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-896fh" [c203407c-54b8-4dff-8002-5661f1205ce1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004146528s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-140056
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-140056: (5.763766688s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

TestAddons/parallel/CSI (33.03s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 9.217235ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-140056 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-140056 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fcca4483-1e3e-4f89-92b8-cfcec253ed30] Pending
helpers_test.go:344: "task-pv-pod" [fcca4483-1e3e-4f89-92b8-cfcec253ed30] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fcca4483-1e3e-4f89-92b8-cfcec253ed30] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003526273s
addons_test.go:586: (dbg) Run:  kubectl --context addons-140056 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-140056 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-140056 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-140056 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-140056 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-140056 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-140056 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bc752c0c-0c27-4c20-9441-34384f5f26da] Pending
helpers_test.go:344: "task-pv-pod-restore" [bc752c0c-0c27-4c20-9441-34384f5f26da] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bc752c0c-0c27-4c20-9441-34384f5f26da] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003356315s
addons_test.go:628: (dbg) Run:  kubectl --context addons-140056 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-140056 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-140056 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-140056 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.709555572s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (33.03s)
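
The testdata manifests are not reproduced in this log; a minimal sketch of the first step (the storage class name csi-hostpath-sc is an assumption about the addon's default, not taken from the output above):

	# Hypothetical equivalent of testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-140056 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  storageClassName: csi-hostpath-sc   # assumed addon default
	EOF
	# Poll the claim the way the helper does
	kubectl --context addons-140056 get pvc hpvc -o jsonpath={.status.phase}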

TestAddons/parallel/Headlamp (10.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-140056 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-xc2vd" [fe3de941-f71a-4984-9777-3ae913ad0581] Pending
helpers_test.go:344: "headlamp-7867546754-xc2vd" [fe3de941-f71a-4984-9777-3ae913ad0581] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-xc2vd" [fe3de941-f71a-4984-9777-3ae913ad0581] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003450766s
--- PASS: TestAddons/parallel/Headlamp (10.96s)

TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-rgb74" [f6eb9173-988e-4885-9fd7-4a8837cda196] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00348653s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-140056
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (66.31s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-140056 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-140056 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-140056 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [82a8ab90-594d-452d-8390-da3f4fbb869b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [82a8ab90-594d-452d-8390-da3f4fbb869b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [82a8ab90-594d-452d-8390-da3f4fbb869b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 17.00427133s
addons_test.go:992: (dbg) Run:  kubectl --context addons-140056 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 ssh "cat /opt/local-path-provisioner/pvc-4719a5dc-20ce-42e3-9843-cd46009709ea_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-140056 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-140056 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-140056 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-140056 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.254661966s)
--- PASS: TestAddons/parallel/LocalPath (66.31s)
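
A minimal sketch of the claim side of that round trip (the local-path class name is an assumption about the storage-provisioner-rancher addon; the ssh cat above shows the provisioner materialises volumes under /opt/local-path-provisioner on the node):

	kubectl --context addons-140056 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path   # assumed provisioner default
	  resources:
	    requests:
	      storage: 128Mi
	EOF
	# The claim typically stays Pending until a consuming pod is scheduled,
	# which matches the repeated phase polling above.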

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rhfcp" [724260a7-4c1d-4daf-a392-8f7cf7efaa06] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00425851s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-140056
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-jkkhq" [4c98d264-2f76-45bb-bd36-dfa0ff846c01] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00394057s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-140056 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-140056 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.16s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-140056
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-140056: (11.889410311s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-140056
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-140056
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-140056
--- PASS: TestAddons/StoppedEnableDisable (12.16s)
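
What this test pins down is that addon toggles are accepted while the cluster is stopped; a minimal sketch (the enable/disable presumably only update the profile's config until the next start):

	minikube stop -p addons-140056
	# Accepted even though nothing is running
	minikube addons enable dashboard -p addons-140056
	minikube addons disable dashboard -p addons-140056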

TestCertOptions (40.9s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-356103 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0723 15:21:10.025982 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 15:21:17.138843 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-356103 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.236759567s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-356103 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-356103 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-356103 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-356103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-356103
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-356103: (1.985442633s)
--- PASS: TestCertOptions (40.90s)
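
The SANs injected by --apiserver-ips/--apiserver-names can be checked the same way the test does, with openssl inside the node; a minimal sketch:

	# Print the apiserver cert and pull out the SAN block
	minikube ssh -p cert-options-356103 -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"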

TestCertExpiration (247.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-519314 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-519314 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (36.702715128s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-519314 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-519314 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (28.564300695s)
helpers_test.go:175: Cleaning up "cert-expiration-519314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-519314
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-519314: (2.183582366s)
--- PASS: TestCertExpiration (247.45s)
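
--cert-expiration takes a Go duration; the run above starts with a 3-minute TTL, lets it lapse (hence the ~4-minute wall clock), then re-issues certificates by restarting with a one-year value. A sketch of that second step:

	# Restarting an existing profile with a new TTL regenerates the certs
	minikube start -p cert-expiration-519314 --memory=2048 \
	  --cert-expiration=8760h --driver=docker --container-runtime=crio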

TestForceSystemdFlag (41.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-246914 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-246914 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.680091138s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-246914 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-246914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-246914
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-246914: (2.644129919s)
--- PASS: TestForceSystemdFlag (41.63s)
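
The ssh cat above is checking CRI-O's cgroup manager; a minimal sketch of the same assertion (cgroup_manager is the standard CRI-O configuration key):

	minikube -p force-systemd-flag-246914 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected with --force-systemd:
	#   cgroup_manager = "systemd"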

TestForceSystemdEnv (39.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-096754 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-096754 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.925440575s)
helpers_test.go:175: Cleaning up "force-systemd-env-096754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-096754
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-096754: (2.454570246s)
--- PASS: TestForceSystemdEnv (39.38s)

TestErrorSpam/setup (30.44s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-755458 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-755458 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-755458 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-755458 --driver=docker  --container-runtime=crio: (30.438610001s)
--- PASS: TestErrorSpam/setup (30.44s)

TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 pause
--- PASS: TestErrorSpam/pause (1.63s)

TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

TestErrorSpam/stop (1.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 stop: (1.238416102s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-755458 --log_dir /tmp/nospam-755458 stop
--- PASS: TestErrorSpam/stop (1.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19319-3317687/.minikube/files/etc/test/nested/copy/3323080/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (57.21s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-054469 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-054469 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (57.210655834s)
--- PASS: TestFunctional/serial/StartWithProxy (57.21s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.02s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-054469 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-054469 --alsologtostderr -v=8: (29.017949621s)
functional_test.go:659: soft start took 29.024039755s for "functional-054469" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.02s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-054469 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 cache add registry.k8s.io/pause:3.1: (1.446545756s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 cache add registry.k8s.io/pause:3.3: (1.395154514s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 cache add registry.k8s.io/pause:latest: (1.317206656s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-054469 /tmp/TestFunctionalserialCacheCmdcacheadd_local3455593830/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cache add minikube-local-cache-test:functional-054469
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cache delete minikube-local-cache-test:functional-054469
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-054469
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.007384ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 cache reload: (1.028747588s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)
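
The cache subcommands above form a round trip: add an image to minikube's host-side cache, delete it from the node, then reload pushes the cached copy back into the runtime. A sketch:

	minikube -p functional-054469 cache add registry.k8s.io/pause:latest
	# Remove it from the node's container storage...
	minikube -p functional-054469 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# ...then restore all cached images from the host
	minikube -p functional-054469 cache reload
	minikube -p functional-054469 ssh sudo crictl inspecti registry.k8s.io/pause:latest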

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 kubectl -- --context functional-054469 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-054469 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (36.1s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-054469 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-054469 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.099602906s)
functional_test.go:757: restart took 36.099706464s for "functional-054469" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.10s)
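
--extra-config uses component.flag=value form and is retained in the profile across restarts (it reappears in the DryRun config dump later in this report). A sketch, with a hypothetical verification step; the static-pod name kube-apiserver-functional-054469 assumes the node is named after the profile:

	minikube start -p functional-054469 --wait=all \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
	# Hypothetical check that the flag reached the apiserver static pod
	kubectl --context functional-054469 -n kube-system get pod \
	  kube-apiserver-functional-054469 -o yaml | grep enable-admission-plugins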

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-054469 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.73s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 logs: (1.733707219s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

TestFunctional/serial/LogsFileCmd (1.81s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 logs --file /tmp/TestFunctionalserialLogsFileCmd3015935952/001/logs.txt
E0723 14:41:10.026677 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:41:10.033871 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:41:10.044182 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:41:10.064610 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:41:10.104878 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:41:10.185139 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:41:10.345702 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:41:10.666221 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 logs --file /tmp/TestFunctionalserialLogsFileCmd3015935952/001/logs.txt: (1.811457941s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.81s)

TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-054469 apply -f testdata/invalidsvc.yaml
E0723 14:41:11.306406 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:41:12.586719 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-054469
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-054469: exit status 115 (558.263302ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30928 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-054469 delete -f testdata/invalidsvc.yaml
E0723 14:41:15.147095 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
--- PASS: TestFunctional/serial/InvalidService (4.25s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 config get cpus: exit status 14 (59.887755ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 config get cpus: exit status 14 (76.218899ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
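
minikube config get exits with status 14 when the key is unset, which is exactly what the test asserts on; a sketch of the set/get/unset cycle:

	minikube -p functional-054469 config set cpus 2
	minikube -p functional-054469 config get cpus    # prints 2, exit 0
	minikube -p functional-054469 config unset cpus
	minikube -p functional-054469 config get cpus    # exit 14: key not found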

TestFunctional/parallel/DashboardCmd (8.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-054469 --alsologtostderr -v=1]
2024/07/23 14:42:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-054469 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3351183: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.19s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-054469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-054469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.043972ms)

-- stdout --
	* [functional-054469] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0723 14:42:16.526426 3350946 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:42:16.526584 3350946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:42:16.526595 3350946 out.go:304] Setting ErrFile to fd 2...
	I0723 14:42:16.526600 3350946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:42:16.526834 3350946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:42:16.527187 3350946 out.go:298] Setting JSON to false
	I0723 14:42:16.528131 3350946 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":84283,"bootTime":1721661454,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 14:42:16.528204 3350946 start.go:139] virtualization:  
	I0723 14:42:16.530626 3350946 out.go:177] * [functional-054469] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0723 14:42:16.532749 3350946 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:42:16.532881 3350946 notify.go:220] Checking for updates...
	I0723 14:42:16.536457 3350946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:42:16.538187 3350946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:42:16.539727 3350946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 14:42:16.541376 3350946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 14:42:16.543158 3350946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:42:16.545257 3350946 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:42:16.545802 3350946 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:42:16.568512 3350946 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 14:42:16.568636 3350946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:42:16.634595 3350946 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-23 14:42:16.624577123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:42:16.634708 3350946 docker.go:307] overlay module found
	I0723 14:42:16.636907 3350946 out.go:177] * Using the docker driver based on existing profile
	I0723 14:42:16.638649 3350946 start.go:297] selected driver: docker
	I0723 14:42:16.638672 3350946 start.go:901] validating driver "docker" against &{Name:functional-054469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-054469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:42:16.638791 3350946 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:42:16.641306 3350946 out.go:177] 
	W0723 14:42:16.648839 3350946 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0723 14:42:16.651643 3350946 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-054469 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-054469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-054469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.200276ms)

                                                
                                                
-- stdout --
	* [functional-054469] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:42:16.351127 3350904 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:42:16.351314 3350904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:42:16.351351 3350904 out.go:304] Setting ErrFile to fd 2...
	I0723 14:42:16.351371 3350904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:42:16.353429 3350904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:42:16.353947 3350904 out.go:298] Setting JSON to false
	I0723 14:42:16.354946 3350904 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":84283,"bootTime":1721661454,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 14:42:16.355052 3350904 start.go:139] virtualization:  
	I0723 14:42:16.357403 3350904 out.go:177] * [functional-054469] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0723 14:42:16.359545 3350904 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 14:42:16.359681 3350904 notify.go:220] Checking for updates...
	I0723 14:42:16.362595 3350904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 14:42:16.364202 3350904 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 14:42:16.365976 3350904 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 14:42:16.367744 3350904 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 14:42:16.369524 3350904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 14:42:16.371800 3350904 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:42:16.372395 3350904 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 14:42:16.395021 3350904 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 14:42:16.395137 3350904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:42:16.465058 3350904 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-23 14:42:16.455451402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:42:16.465171 3350904 docker.go:307] overlay module found
	I0723 14:42:16.467152 3350904 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0723 14:42:16.468621 3350904 start.go:297] selected driver: docker
	I0723 14:42:16.468639 3350904 start.go:901] validating driver "docker" against &{Name:functional-054469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-054469 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0723 14:42:16.468730 3350904 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 14:42:16.470991 3350904 out.go:177] 
	W0723 14:42:16.472694 3350904 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0723 14:42:16.474274 3350904 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)
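
Note: the three status invocations above can be replayed by hand; a minimal sketch against this run's profile (the labels before each colon in the template are arbitrary):

	# default human-readable status
	out/minikube-linux-arm64 -p functional-054469 status
	# Go-template output for selected fields
	out/minikube-linux-arm64 -p functional-054469 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	# machine-readable status
	out/minikube-linux-arm64 -p functional-054469 status -o json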

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-054469 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-054469 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-llhcb" [0688642b-31c6-4a11-8660-2bc85c1f28c4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-llhcb" [0688642b-31c6-4a11-8660-2bc85c1f28c4] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003421248s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32306
functional_test.go:1671: http://192.168.49.2:32306: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-llhcb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32306
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.55s)
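
Note: the flow exercised here is the standard expose-and-connect sequence; a minimal sketch using the names and image from the log (the NodePort, 32306 in this run, varies per run):

	# deploy an echo server and expose it via NodePort
	kubectl --context functional-054469 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-054469 expose deployment hello-node-connect --type=NodePort --port=8080
	# resolve a reachable URL, then hit it
	out/minikube-linux-arm64 -p functional-054469 service hello-node-connect --url
	curl http://192.168.49.2:32306/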

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)
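
Note: a minimal sketch of the one-shot SSH form used above (the quoted command runs inside the node, and its exit status is propagated back to the caller, as the NonActiveRuntimeDisabled subtest below relies on):

	out/minikube-linux-arm64 -p functional-054469 ssh "cat /etc/hostname"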

                                                
                                    
TestFunctional/parallel/CpCmd (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh -n functional-054469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cp functional-054469:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd636732163/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh -n functional-054469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh -n functional-054469 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.29s)
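
Note: the cp subtest covers both copy directions plus copying into a not-yet-existing guest directory; a minimal sketch (host destination path is illustrative):

	# host -> node
	out/minikube-linux-arm64 -p functional-054469 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node -> host (source prefixed with the node name)
	out/minikube-linux-arm64 -p functional-054469 cp functional-054469:/home/docker/cp-test.txt /tmp/cp-test.txt
	# host -> node, target directory created on demand
	out/minikube-linux-arm64 -p functional-054469 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt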

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/3323080/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo cat /etc/test/nested/copy/3323080/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/3323080.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo cat /etc/ssl/certs/3323080.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/3323080.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo cat /usr/share/ca-certificates/3323080.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/33230802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo cat /etc/ssl/certs/33230802.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/33230802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo cat /usr/share/ca-certificates/33230802.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-054469 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 ssh "sudo systemctl is-active docker": exit status 1 (256.811471ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 ssh "sudo systemctl is-active containerd": exit status 1 (251.838949ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
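
Note: exit status 3 is systemctl's code for an inactive unit, so the two non-zero exits above are the expected result on a crio profile. A minimal sketch (the crio check is an assumption based on this run's --container-runtime=crio):

	out/minikube-linux-arm64 -p functional-054469 ssh "sudo systemctl is-active docker"      # "inactive", exit 3
	out/minikube-linux-arm64 -p functional-054469 ssh "sudo systemctl is-active containerd"  # "inactive", exit 3
	out/minikube-linux-arm64 -p functional-054469 ssh "sudo systemctl is-active crio"        # expected: "active", exit 0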

                                                
                                    
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-054469 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-054469 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-054469 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-054469 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3347893: os: process already finished
helpers_test.go:502: unable to terminate pid 3347699: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-054469 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-054469 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b47b1e3c-bcfa-434b-ba6a-6f99fbfb8152] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b47b1e3c-bcfa-434b-ba6a-6f99fbfb8152] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003335852s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-054469 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.11.2 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-054469 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
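
Note: taken together, the tunnel subtests walk the usual LoadBalancer workflow; a minimal sketch (service name and jsonpath taken from the log; the ingress IP, 10.101.11.2 in this run, varies):

	# terminal 1: keep a tunnel open so LoadBalancer services get an ingress IP
	out/minikube-linux-arm64 -p functional-054469 tunnel
	# terminal 2: read the assigned IP, then hit the service
	kubectl --context functional-054469 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.101.11.2/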

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-054469 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-054469 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-jc8zb" [1d26d912-6997-4d3a-b0d6-b0014c04b538] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-jc8zb" [1d26d912-6997-4d3a-b0d6-b0014c04b538] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003462026s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 service list -o json
functional_test.go:1490: Took "499.249964ms" to run "out/minikube-linux-arm64 -p functional-054469 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30671
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30671
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
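
Note: the service subtests above map onto these invocations; a minimal sketch:

	out/minikube-linux-arm64 -p functional-054469 service list                             # table of services
	out/minikube-linux-arm64 -p functional-054469 service list -o json                     # machine-readable
	out/minikube-linux-arm64 -p functional-054469 service --https --url hello-node         # https:// endpoint
	out/minikube-linux-arm64 -p functional-054469 service hello-node --url --format='{{.IP}}'  # node IP only
	out/minikube-linux-arm64 -p functional-054469 service hello-node --url                 # http:// endpoint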

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "319.830712ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "64.960339ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "323.890639ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "72.364203ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
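
Note: the timings above contrast the full and light listings; a minimal sketch of the variants exercised (the light form skips the per-cluster status checks, which is consistent with it returning in ~70ms versus ~320ms here):

	out/minikube-linux-arm64 profile list                  # probes each profile's status
	out/minikube-linux-arm64 profile list -l               # light listing, no status probes
	out/minikube-linux-arm64 profile list -o json
	out/minikube-linux-arm64 profile list -o json --light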

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (21.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdany-port3844426748/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721745710049995047" to /tmp/TestFunctionalparallelMountCmdany-port3844426748/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721745710049995047" to /tmp/TestFunctionalparallelMountCmdany-port3844426748/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721745710049995047" to /tmp/TestFunctionalparallelMountCmdany-port3844426748/001/test-1721745710049995047
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (323.581185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh -- ls -la /mount-9p
E0723 14:41:50.988218 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 23 14:41 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 23 14:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 23 14:41 test-1721745710049995047
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh cat /mount-9p/test-1721745710049995047
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-054469 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e06838c0-da4f-4a3f-97d5-48d4d262f59b] Pending
helpers_test.go:344: "busybox-mount" [e06838c0-da4f-4a3f-97d5-48d4d262f59b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e06838c0-da4f-4a3f-97d5-48d4d262f59b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e06838c0-da4f-4a3f-97d5-48d4d262f59b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.003972202s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-054469 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdany-port3844426748/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.61s)
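
Note: the mount flow above boils down to the following; a minimal sketch (the host path is illustrative; the 9p mount only lives as long as the serving process):

	# serve a host directory into the node at /mount-9p over 9p (foreground process)
	out/minikube-linux-arm64 mount -p functional-054469 /tmp/data:/mount-9p &
	# verify and inspect from inside the node
	out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-054469 ssh -- ls -la /mount-9p
	# tear down: unmount in the guest, then stop the mount process
	out/minikube-linux-arm64 -p functional-054469 ssh "sudo umount -f /mount-9p"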

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdspecific-port2719137398/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.359264ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdspecific-port2719137398/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 ssh "sudo umount -f /mount-9p": exit status 1 (264.411753ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-054469 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdspecific-port2719137398/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup842053406/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup842053406/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup842053406/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T" /mount1: exit status 1 (572.981096ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-054469 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup842053406/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup842053406/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-054469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup842053406/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)
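
Note: the cleanup path relies on the kill switch shown at functional_test_mount_test.go:370, after which all three mount processes for the profile are gone; a minimal sketch:

	out/minikube-linux-arm64 mount -p functional-054469 --kill=true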

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.98s)
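
Note: a minimal sketch of the two version forms checked above (the description of --components output is an assumption from the flag name, not from this log):

	out/minikube-linux-arm64 -p functional-054469 version --short                # version string only
	out/minikube-linux-arm64 -p functional-054469 version -o=json --components  # JSON including per-component version info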

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-054469 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240719-e7903573
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-054469
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-054469 image ls --format short --alsologtostderr:
I0723 14:42:35.212941 3352613 out.go:291] Setting OutFile to fd 1 ...
I0723 14:42:35.213052 3352613 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:35.213062 3352613 out.go:304] Setting ErrFile to fd 2...
I0723 14:42:35.213068 3352613 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:35.213313 3352613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
I0723 14:42:35.213942 3352613 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:35.214118 3352613 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:35.214641 3352613 cli_runner.go:164] Run: docker container inspect functional-054469 --format={{.State.Status}}
I0723 14:42:35.231991 3352613 ssh_runner.go:195] Run: systemctl --version
I0723 14:42:35.232092 3352613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-054469
I0723 14:42:35.248419 3352613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37162 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/functional-054469/id_rsa Username:docker}
I0723 14:42:35.335372 3352613 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
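
Note: the image-list subtests cover three output formats; a minimal sketch:

	out/minikube-linux-arm64 -p functional-054469 image ls --format short   # one repo:tag per line, as above
	out/minikube-linux-arm64 -p functional-054469 image ls --format table   # the table shown below
	out/minikube-linux-arm64 -p functional-054469 image ls --format json    # digests and sizes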

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-054469 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | latest             | 443d199e8bfcc | 197MB  |
| localhost/my-image                      | functional-054469  | beb2711312335 | 1.64MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | d48f992a22722 | 61.6MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kicbase/echo-server           | functional-054469  | ce2d2cda2d858 | 4.79MB |
| docker.io/kindest/kindnetd              | v20240719-e7903573 | f42786f8afd22 | 90.3MB |
| docker.io/library/nginx                 | alpine             | 5461b18aaccf3 | 46.7MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 61773190d42ff | 114MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 8e97cdb19e7cc | 108MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 2351f570ed0ea | 89.2MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5e32961ddcea3 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-054469 image ls --format table --alsologtostderr:
I0723 14:42:38.380696 3352954 out.go:291] Setting OutFile to fd 1 ...
I0723 14:42:38.380843 3352954 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:38.380850 3352954 out.go:304] Setting ErrFile to fd 2...
I0723 14:42:38.380854 3352954 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:38.381156 3352954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
I0723 14:42:38.381811 3352954 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:38.381982 3352954 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:38.382554 3352954 cli_runner.go:164] Run: docker container inspect functional-054469 --format={{.State.Status}}
I0723 14:42:38.399457 3352954 ssh_runner.go:195] Run: systemctl --version
I0723 14:42:38.399513 3352954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-054469
I0723 14:42:38.419440 3352954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37162 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/functional-054469/id_rsa Username:docker}
I0723 14:42:38.507403 3352954 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-054469 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},
{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"113538528"},
{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:functional-054469"],"size":"4788229"},
{"id":"f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800","repoDigests":["docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a","docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"],"repoTags":["docker.io/kindest/kindnetd:v20240719-e7903573"],"size":"90281007"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},
{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"108229958"},
{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":["docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e","docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671377"},
{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":["docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe","docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e"],"repoTags":["docker.io/library/nginx:latest"],"size":"197104786"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},
{"id":"71d8d2c2e778d4a88f24bfeff023d24aefb01f3379528470bd21d084da08758b","repoDigests":["docker.io/library/0458f9b70ad1867d94f6c751d45f69d52d996395973f42f2e348be3a08b35c3c-tmp@sha256:9249612f35b0b29ed5999c40e500f8657d611ed29875302a734ae6745f2d736b"],"repoTags":[],"size":"1637644"},
{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4","registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"61568326"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493","docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"90278450"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"beb271131233510c3642537c026d0c5e87cb8ca40862099d59b15b0bac5ae26c","repoDigests":["localhost/my-image@sha256:82cb88455f984af612d8bb815e21425d12e34dd66baec03099051876e4bf22b5"],"repoTags":["localhost/my-image:functional-054469"],"size":"1640225"},
{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},
{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"89199511"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-054469 image ls --format json --alsologtostderr:
I0723 14:42:38.152926 3352923 out.go:291] Setting OutFile to fd 1 ...
I0723 14:42:38.153092 3352923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:38.153101 3352923 out.go:304] Setting ErrFile to fd 2...
I0723 14:42:38.153106 3352923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:38.153346 3352923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
I0723 14:42:38.154129 3352923 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:38.154261 3352923 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:38.154799 3352923 cli_runner.go:164] Run: docker container inspect functional-054469 --format={{.State.Status}}
I0723 14:42:38.173903 3352923 ssh_runner.go:195] Run: systemctl --version
I0723 14:42:38.173959 3352923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-054469
I0723 14:42:38.193645 3352923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37162 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/functional-054469/id_rsa Username:docker}
I0723 14:42:38.283305 3352923 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
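For reference, the listing above is a flat JSON array of image records. A minimal Go sketch (not part of the test suite) that runs the same command and decodes it; the struct tags mirror the keys visible in the stdout, and the binary path and profile name are simply the ones from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the keys in the stdout above; size is a string-encoded byte count.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same invocation as functional_test.go:260 above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-054469",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := img.ID[:12] // e.g. the dashboard and metrics-scraper entries carry no repoTags
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%s\t%s bytes\n", tag, img.Size)
	}
}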

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-054469 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:functional-054469
size: "4788229"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "108229958"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
- registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "61568326"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800
repoDigests:
- docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a
- docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a
repoTags:
- docker.io/kindest/kindnetd:v20240719-e7903573
size: "90281007"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "113538528"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "89199511"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests:
- docker.io/library/nginx@sha256:1e67a3c8607fe555f47dc8a72f25424b10273639136c061c508628da3112f90e
- docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62
repoTags:
- docker.io/library/nginx:alpine
size: "46671377"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
- docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "90278450"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests:
- docker.io/library/nginx@sha256:97b83c73d3165f2deb95e02459a6e905f092260cd991f4c4eae2f192ddb99cbe
- docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e
repoTags:
- docker.io/library/nginx:latest
size: "197104786"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-054469 image ls --format yaml --alsologtostderr:
I0723 14:42:35.432635 3352644 out.go:291] Setting OutFile to fd 1 ...
I0723 14:42:35.432819 3352644 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:35.432844 3352644 out.go:304] Setting ErrFile to fd 2...
I0723 14:42:35.432863 3352644 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:35.433121 3352644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
I0723 14:42:35.433795 3352644 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:35.433987 3352644 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:35.434504 3352644 cli_runner.go:164] Run: docker container inspect functional-054469 --format={{.State.Status}}
I0723 14:42:35.451863 3352644 ssh_runner.go:195] Run: systemctl --version
I0723 14:42:35.451932 3352644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-054469
I0723 14:42:35.467618 3352644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37162 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/functional-054469/id_rsa Username:docker}
I0723 14:42:35.555469 3352644 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
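The YAML listing carries the same records as the JSON one. A sketch of decoding it in Go, assuming the third-party gopkg.in/yaml.v3 package (the harness itself does not use it):

package main

import (
	"fmt"
	"os/exec"

	"gopkg.in/yaml.v3"
)

type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-054469",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	fmt.Printf("%d images listed\n", len(images))
}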

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-054469 ssh pgrep buildkitd: exit status 1 (249.889778ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image build -t localhost/my-image:functional-054469 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 image build -t localhost/my-image:functional-054469 testdata/build --alsologtostderr: (2.014569224s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-054469 image build -t localhost/my-image:functional-054469 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 71d8d2c2e77
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-054469
--> beb27113123
Successfully tagged localhost/my-image:functional-054469
beb271131233510c3642537c026d0c5e87cb8ca40862099d59b15b0bac5ae26c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-054469 image build -t localhost/my-image:functional-054469 testdata/build --alsologtostderr:
I0723 14:42:35.906241 3352733 out.go:291] Setting OutFile to fd 1 ...
I0723 14:42:35.907152 3352733 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:35.907194 3352733 out.go:304] Setting ErrFile to fd 2...
I0723 14:42:35.907214 3352733 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 14:42:35.907503 3352733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
I0723 14:42:35.908143 3352733 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:35.910105 3352733 config.go:182] Loaded profile config "functional-054469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0723 14:42:35.910687 3352733 cli_runner.go:164] Run: docker container inspect functional-054469 --format={{.State.Status}}
I0723 14:42:35.927589 3352733 ssh_runner.go:195] Run: systemctl --version
I0723 14:42:35.927642 3352733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-054469
I0723 14:42:35.943907 3352733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37162 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/functional-054469/id_rsa Username:docker}
I0723 14:42:36.031376 3352733 build_images.go:161] Building image from path: /tmp/build.3129496068.tar
I0723 14:42:36.031479 3352733 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0723 14:42:36.040731 3352733 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3129496068.tar
I0723 14:42:36.044117 3352733 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3129496068.tar: stat -c "%s %y" /var/lib/minikube/build/build.3129496068.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3129496068.tar': No such file or directory
I0723 14:42:36.044153 3352733 ssh_runner.go:362] scp /tmp/build.3129496068.tar --> /var/lib/minikube/build/build.3129496068.tar (3072 bytes)
I0723 14:42:36.070348 3352733 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3129496068
I0723 14:42:36.079902 3352733 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3129496068 -xf /var/lib/minikube/build/build.3129496068.tar
I0723 14:42:36.089548 3352733 crio.go:315] Building image: /var/lib/minikube/build/build.3129496068
I0723 14:42:36.089629 3352733 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-054469 /var/lib/minikube/build/build.3129496068 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0723 14:42:37.844961 3352733 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-054469 /var/lib/minikube/build/build.3129496068 --cgroup-manager=cgroupfs: (1.755302394s)
I0723 14:42:37.845029 3352733 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3129496068
I0723 14:42:37.853572 3352733 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3129496068.tar
I0723 14:42:37.861922 3352733 build_images.go:217] Built localhost/my-image:functional-054469 from /tmp/build.3129496068.tar
I0723 14:42:37.861959 3352733 build_images.go:133] succeeded building to: functional-054469
I0723 14:42:37.861964 3352733 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.50s)
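The STEP lines above imply the shape of the build context; the exact files under testdata/build are not in this log, so the following reconstruction is an assumption:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

Since pgrep buildkitd exited 1, the harness took the crio path visible in the stderr: tar the context to /tmp/build.3129496068.tar, scp it to /var/lib/minikube/build on the node, untar it, and run sudo podman build --cgroup-manager=cgroupfs against it.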

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-054469
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image load --daemon docker.io/kicbase/echo-server:functional-054469 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-054469 image load --daemon docker.io/kicbase/echo-server:functional-054469 --alsologtostderr: (1.023050565s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image load --daemon docker.io/kicbase/echo-server:functional-054469 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-054469
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image load --daemon docker.io/kicbase/echo-server:functional-054469 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image save docker.io/kicbase/echo-server:functional-054469 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image rm docker.io/kicbase/echo-server:functional-054469 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
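ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/reload round trip. Replayed by hand with the exact commands from this run:

out/minikube-linux-arm64 -p functional-054469 image save docker.io/kicbase/echo-server:functional-054469 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-054469 image rm docker.io/kicbase/echo-server:functional-054469
out/minikube-linux-arm64 -p functional-054469 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-054469 image ls

Each step is verified with image ls, as the functional_test.go:447 lines show.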

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-054469
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 image save --daemon docker.io/kicbase/echo-server:functional-054469 --alsologtostderr
E0723 14:42:31.948882 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-054469
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-054469 update-context --alsologtostderr -v=2
E0723 14:43:53.869967 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-054469
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-054469
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-054469
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (188.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-750309 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0723 14:46:10.026291 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:46:17.136623 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:17.141978 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:17.152303 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:17.172602 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:17.213326 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:17.293634 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:17.454043 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:17.774953 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:18.415150 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:19.695897 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:22.256038 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:27.377023 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:37.617774 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:46:37.711048 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:46:58.098662 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:47:39.059317 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-750309 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m8.063793814s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (188.94s)
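Here --ha provisions three control-plane nodes (the primary plus m02 and m03); the worker m04 is added in AddWorkerNode below, and clients reach the API server through the load-balanced endpoint https://192.168.49.254:8443 that later appears in the status logs. The per-node breakdown comes from the same command the test runs:

out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr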

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-750309 -- rollout status deployment/busybox: (4.922714429s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-5w9dz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-jkd4g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-r67td -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-5w9dz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-jkd4g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-r67td -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-5w9dz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-jkd4g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-r67td -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.87s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-5w9dz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-5w9dz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-jkd4g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-jkd4g -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-r67td -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-750309 -- exec busybox-fc5497c4f-r67td -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)
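The sh pipeline above pulls the resolved address out of busybox's nslookup output; a commented breakdown (the line and field positions are specific to busybox's output format, so treat them as an assumption):

# inside the pod:
nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
# awk 'NR==5' keeps the answer line of the nslookup output;
# cut -d' ' -f3 takes its third space-separated field, the host IP
# (192.168.49.1 here), which the follow-up ping -c 1 then targets.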

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (35.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-750309 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-750309 -v=7 --alsologtostderr: (34.443838582s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.39s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-750309 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp testdata/cp-test.txt ha-750309:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1927519924/001/cp-test_ha-750309.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309:/home/docker/cp-test.txt ha-750309-m02:/home/docker/cp-test_ha-750309_ha-750309-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m02 "sudo cat /home/docker/cp-test_ha-750309_ha-750309-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309:/home/docker/cp-test.txt ha-750309-m03:/home/docker/cp-test_ha-750309_ha-750309-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m03 "sudo cat /home/docker/cp-test_ha-750309_ha-750309-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309:/home/docker/cp-test.txt ha-750309-m04:/home/docker/cp-test_ha-750309_ha-750309-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m04 "sudo cat /home/docker/cp-test_ha-750309_ha-750309-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp testdata/cp-test.txt ha-750309-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1927519924/001/cp-test_ha-750309-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m02:/home/docker/cp-test.txt ha-750309:/home/docker/cp-test_ha-750309-m02_ha-750309.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309 "sudo cat /home/docker/cp-test_ha-750309-m02_ha-750309.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m02:/home/docker/cp-test.txt ha-750309-m03:/home/docker/cp-test_ha-750309-m02_ha-750309-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m03 "sudo cat /home/docker/cp-test_ha-750309-m02_ha-750309-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m02:/home/docker/cp-test.txt ha-750309-m04:/home/docker/cp-test_ha-750309-m02_ha-750309-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m04 "sudo cat /home/docker/cp-test_ha-750309-m02_ha-750309-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp testdata/cp-test.txt ha-750309-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1927519924/001/cp-test_ha-750309-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m03:/home/docker/cp-test.txt ha-750309:/home/docker/cp-test_ha-750309-m03_ha-750309.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309 "sudo cat /home/docker/cp-test_ha-750309-m03_ha-750309.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m03:/home/docker/cp-test.txt ha-750309-m02:/home/docker/cp-test_ha-750309-m03_ha-750309-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m02 "sudo cat /home/docker/cp-test_ha-750309-m03_ha-750309-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m03:/home/docker/cp-test.txt ha-750309-m04:/home/docker/cp-test_ha-750309-m03_ha-750309-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m04 "sudo cat /home/docker/cp-test_ha-750309-m03_ha-750309-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp testdata/cp-test.txt ha-750309-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1927519924/001/cp-test_ha-750309-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m04:/home/docker/cp-test.txt ha-750309:/home/docker/cp-test_ha-750309-m04_ha-750309.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309 "sudo cat /home/docker/cp-test_ha-750309-m04_ha-750309.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m04:/home/docker/cp-test.txt ha-750309-m02:/home/docker/cp-test_ha-750309-m04_ha-750309-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m02 "sudo cat /home/docker/cp-test_ha-750309-m04_ha-750309-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 cp ha-750309-m04:/home/docker/cp-test.txt ha-750309-m03:/home/docker/cp-test_ha-750309-m04_ha-750309-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 ssh -n ha-750309-m03 "sudo cat /home/docker/cp-test_ha-750309-m04_ha-750309-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.31s)
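The CopyFile matrix exercises every direction minikube cp supports; one representative command per direction, taken verbatim from the run above:

# host -> node
out/minikube-linux-arm64 -p ha-750309 cp testdata/cp-test.txt ha-750309:/home/docker/cp-test.txt
# node -> host
out/minikube-linux-arm64 -p ha-750309 cp ha-750309:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1927519924/001/cp-test_ha-750309.txt
# node -> node
out/minikube-linux-arm64 -p ha-750309 cp ha-750309:/home/docker/cp-test.txt ha-750309-m02:/home/docker/cp-test_ha-750309_ha-750309-m02.txt

Every copy is then checked with a "minikube ssh -n <node> sudo cat" of the destination file, per the helpers_test.go:534 lines.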

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 node stop m02 -v=7 --alsologtostderr
E0723 14:49:00.979552 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-750309 node stop m02 -v=7 --alsologtostderr: (11.974674631s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr: exit status 7 (717.86031ms)

                                                
                                                
-- stdout --
	ha-750309
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-750309-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-750309-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-750309-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:49:06.636900 3369478 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:49:06.637125 3369478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:49:06.637157 3369478 out.go:304] Setting ErrFile to fd 2...
	I0723 14:49:06.637181 3369478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:49:06.637484 3369478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:49:06.637853 3369478 out.go:298] Setting JSON to false
	I0723 14:49:06.638062 3369478 mustload.go:65] Loading cluster: ha-750309
	I0723 14:49:06.638204 3369478 notify.go:220] Checking for updates...
	I0723 14:49:06.638744 3369478 config.go:182] Loaded profile config "ha-750309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:49:06.638781 3369478 status.go:255] checking status of ha-750309 ...
	I0723 14:49:06.639488 3369478 cli_runner.go:164] Run: docker container inspect ha-750309 --format={{.State.Status}}
	I0723 14:49:06.661398 3369478 status.go:330] ha-750309 host status = "Running" (err=<nil>)
	I0723 14:49:06.661421 3369478 host.go:66] Checking if "ha-750309" exists ...
	I0723 14:49:06.661821 3369478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-750309
	I0723 14:49:06.697209 3369478 host.go:66] Checking if "ha-750309" exists ...
	I0723 14:49:06.697593 3369478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:49:06.697649 3369478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-750309
	I0723 14:49:06.716063 3369478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37167 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/ha-750309/id_rsa Username:docker}
	I0723 14:49:06.808284 3369478 ssh_runner.go:195] Run: systemctl --version
	I0723 14:49:06.812824 3369478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:49:06.826076 3369478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 14:49:06.883405 3369478 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-23 14:49:06.873503506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 14:49:06.883990 3369478 kubeconfig.go:125] found "ha-750309" server: "https://192.168.49.254:8443"
	I0723 14:49:06.884024 3369478 api_server.go:166] Checking apiserver status ...
	I0723 14:49:06.884133 3369478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:49:06.895087 3369478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1460/cgroup
	I0723 14:49:06.904491 3369478 api_server.go:182] apiserver freezer: "4:freezer:/docker/dd75eda29045ccb2ba91ef922fc4cdeb62015604dc144afaabc59b5703c3b286/crio/crio-e50803077c1270fa95826e039a3ddd039b56d32238be3f85f366085259566f77"
	I0723 14:49:06.904583 3369478 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dd75eda29045ccb2ba91ef922fc4cdeb62015604dc144afaabc59b5703c3b286/crio/crio-e50803077c1270fa95826e039a3ddd039b56d32238be3f85f366085259566f77/freezer.state
	I0723 14:49:06.914453 3369478 api_server.go:204] freezer state: "THAWED"
	I0723 14:49:06.914488 3369478 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0723 14:49:06.923570 3369478 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0723 14:49:06.923601 3369478 status.go:422] ha-750309 apiserver status = Running (err=<nil>)
	I0723 14:49:06.923616 3369478 status.go:257] ha-750309 status: &{Name:ha-750309 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:49:06.923665 3369478 status.go:255] checking status of ha-750309-m02 ...
	I0723 14:49:06.924020 3369478 cli_runner.go:164] Run: docker container inspect ha-750309-m02 --format={{.State.Status}}
	I0723 14:49:06.940494 3369478 status.go:330] ha-750309-m02 host status = "Stopped" (err=<nil>)
	I0723 14:49:06.940514 3369478 status.go:343] host is not running, skipping remaining checks
	I0723 14:49:06.940521 3369478 status.go:257] ha-750309-m02 status: &{Name:ha-750309-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:49:06.940541 3369478 status.go:255] checking status of ha-750309-m03 ...
	I0723 14:49:06.940847 3369478 cli_runner.go:164] Run: docker container inspect ha-750309-m03 --format={{.State.Status}}
	I0723 14:49:06.958491 3369478 status.go:330] ha-750309-m03 host status = "Running" (err=<nil>)
	I0723 14:49:06.958511 3369478 host.go:66] Checking if "ha-750309-m03" exists ...
	I0723 14:49:06.958875 3369478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-750309-m03
	I0723 14:49:06.975720 3369478 host.go:66] Checking if "ha-750309-m03" exists ...
	I0723 14:49:06.976061 3369478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:49:06.976105 3369478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-750309-m03
	I0723 14:49:06.992589 3369478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37177 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/ha-750309-m03/id_rsa Username:docker}
	I0723 14:49:07.080334 3369478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:49:07.093155 3369478 kubeconfig.go:125] found "ha-750309" server: "https://192.168.49.254:8443"
	I0723 14:49:07.093184 3369478 api_server.go:166] Checking apiserver status ...
	I0723 14:49:07.093232 3369478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 14:49:07.104242 3369478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	I0723 14:49:07.114043 3369478 api_server.go:182] apiserver freezer: "4:freezer:/docker/a5907bd32a3454733aeae0f33946de5f50a845f820d871fafd67bf0515503052/crio/crio-026769b37b061fbaeaf18f5cf84caac16e389ab1fb1009d45f60f6e78defdf51"
	I0723 14:49:07.114146 3369478 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a5907bd32a3454733aeae0f33946de5f50a845f820d871fafd67bf0515503052/crio/crio-026769b37b061fbaeaf18f5cf84caac16e389ab1fb1009d45f60f6e78defdf51/freezer.state
	I0723 14:49:07.123088 3369478 api_server.go:204] freezer state: "THAWED"
	I0723 14:49:07.123123 3369478 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0723 14:49:07.130776 3369478 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0723 14:49:07.130812 3369478 status.go:422] ha-750309-m03 apiserver status = Running (err=<nil>)
	I0723 14:49:07.130854 3369478 status.go:257] ha-750309-m03 status: &{Name:ha-750309-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:49:07.130881 3369478 status.go:255] checking status of ha-750309-m04 ...
	I0723 14:49:07.131207 3369478 cli_runner.go:164] Run: docker container inspect ha-750309-m04 --format={{.State.Status}}
	I0723 14:49:07.147666 3369478 status.go:330] ha-750309-m04 host status = "Running" (err=<nil>)
	I0723 14:49:07.147691 3369478 host.go:66] Checking if "ha-750309-m04" exists ...
	I0723 14:49:07.148037 3369478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-750309-m04
	I0723 14:49:07.171998 3369478 host.go:66] Checking if "ha-750309-m04" exists ...
	I0723 14:49:07.172300 3369478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 14:49:07.172355 3369478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-750309-m04
	I0723 14:49:07.193224 3369478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37182 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/ha-750309-m04/id_rsa Username:docker}
	I0723 14:49:07.279851 3369478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 14:49:07.295556 3369478 status.go:257] ha-750309-m04 status: &{Name:ha-750309-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)
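
The status stderr above shows how each control-plane node is verified: find the kube-apiserver PID with pgrep, read that PID's freezer cgroup and confirm the container is THAWED (a paused container would report FROZEN), then probe /healthz on the HA load-balancer endpoint. Below is a minimal Go sketch of the same three steps, meant to run on the node itself; the function name checkAPIServer and the InsecureSkipVerify transport are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"os/exec"
		"strings"
	)

	func checkAPIServer(healthzURL string) error {
		// 1. Locate the newest matching apiserver process, as in
		//    "sudo pgrep -xnf kube-apiserver.*minikube.*" above.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return fmt.Errorf("apiserver process not found: %w", err)
		}
		pid := strings.TrimSpace(string(out))

		// 2. Resolve the freezer cgroup for that PID and require state THAWED.
		cg, err := os.ReadFile("/proc/" + pid + "/cgroup")
		if err != nil {
			return err
		}
		var freezerPath string
		for _, line := range strings.Split(string(cg), "\n") {
			if i := strings.Index(line, ":freezer:"); i >= 0 {
				freezerPath = line[i+len(":freezer:"):]
				break
			}
		}
		state, err := os.ReadFile("/sys/fs/cgroup/freezer" + freezerPath + "/freezer.state")
		if err != nil {
			return err
		}
		if s := strings.TrimSpace(string(state)); s != "THAWED" {
			return fmt.Errorf("apiserver container not thawed: %s", s)
		}

		// 3. Probe healthz; the endpoint serves a cluster-internal cert,
		//    so this sketch skips verification.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		resp, err := client.Get(healthzURL)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil // what status.go logs above as: apiserver status = Running
	}

	func main() {
		if err := checkAPIServer("https://192.168.49.254:8443/healthz"); err != nil {
			fmt.Println("unhealthy:", err)
			return
		}
		fmt.Println("healthy")
	}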

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (32.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-750309 node start m02 -v=7 --alsologtostderr: (31.707340809s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr: (1.165508378s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (172.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-750309 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-750309 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-750309 -v=7 --alsologtostderr: (36.87914896s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-750309 --wait=true -v=7 --alsologtostderr
E0723 14:51:10.025938 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:51:17.136984 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 14:51:44.819695 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-750309 --wait=true -v=7 --alsologtostderr: (2m15.466797539s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-750309
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (172.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (13.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-750309 node delete m03 -v=7 --alsologtostderr: (11.973233077s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.01s)
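
kubectl's -o go-template output is evaluated with Go's text/template engine, so the Ready assertion above can be reproduced directly against a mock node list. A small sketch follows; the node data is invented for illustration only.

	package main

	import (
		"os"
		"text/template"
	)

	// Verbatim template from the test (minus the shell quoting): print the
	// status of every node condition whose type is "Ready".
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		// Mock two-node list standing in for `kubectl get nodes` output.
		nodes := map[string]interface{}{
			"items": []interface{}{
				map[string]interface{}{"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "Ready", "status": "True"},
					},
				}},
				map[string]interface{}{"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "Ready", "status": "True"},
					},
				}},
			},
		}
		t := template.Must(template.New("ready").Parse(readyTmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" per node
			panic(err)
		}
	}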

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-750309 stop -v=7 --alsologtostderr: (35.643117111s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr: exit status 7 (103.274059ms)

                                                
                                                
-- stdout --
	ha-750309
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-750309-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-750309-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 14:53:23.405020 3383355 out.go:291] Setting OutFile to fd 1 ...
	I0723 14:53:23.405226 3383355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:53:23.405240 3383355 out.go:304] Setting ErrFile to fd 2...
	I0723 14:53:23.405247 3383355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 14:53:23.405499 3383355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 14:53:23.405719 3383355 out.go:298] Setting JSON to false
	I0723 14:53:23.405805 3383355 mustload.go:65] Loading cluster: ha-750309
	I0723 14:53:23.405865 3383355 notify.go:220] Checking for updates...
	I0723 14:53:23.406863 3383355 config.go:182] Loaded profile config "ha-750309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 14:53:23.406887 3383355 status.go:255] checking status of ha-750309 ...
	I0723 14:53:23.407468 3383355 cli_runner.go:164] Run: docker container inspect ha-750309 --format={{.State.Status}}
	I0723 14:53:23.424885 3383355 status.go:330] ha-750309 host status = "Stopped" (err=<nil>)
	I0723 14:53:23.424906 3383355 status.go:343] host is not running, skipping remaining checks
	I0723 14:53:23.424914 3383355 status.go:257] ha-750309 status: &{Name:ha-750309 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:53:23.424937 3383355 status.go:255] checking status of ha-750309-m02 ...
	I0723 14:53:23.425244 3383355 cli_runner.go:164] Run: docker container inspect ha-750309-m02 --format={{.State.Status}}
	I0723 14:53:23.441904 3383355 status.go:330] ha-750309-m02 host status = "Stopped" (err=<nil>)
	I0723 14:53:23.441934 3383355 status.go:343] host is not running, skipping remaining checks
	I0723 14:53:23.441943 3383355 status.go:257] ha-750309-m02 status: &{Name:ha-750309-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 14:53:23.441963 3383355 status.go:255] checking status of ha-750309-m04 ...
	I0723 14:53:23.442279 3383355 cli_runner.go:164] Run: docker container inspect ha-750309-m04 --format={{.State.Status}}
	I0723 14:53:23.463940 3383355 status.go:330] ha-750309-m04 host status = "Stopped" (err=<nil>)
	I0723 14:53:23.463966 3383355 status.go:343] host is not running, skipping remaining checks
	I0723 14:53:23.463974 3383355 status.go:257] ha-750309-m04 status: &{Name:ha-750309-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.75s)
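
As the stderr above shows, each node's host state comes from docker container inspect --format {{.State.Status}}, and the kubelet/apiserver probes are skipped once the host is not running. A hedged Go sketch of that first step; the profile names are the ones from this run and hostState is an illustrative helper, not minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState asks Docker for the raw container state, the first check
	// "minikube status" makes; deeper checks are skipped when it is not running.
	func hostState(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			container, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		for _, node := range []string{"ha-750309", "ha-750309-m02", "ha-750309-m04"} {
			state, err := hostState(node)
			if err != nil {
				fmt.Println(node, "inspect failed:", err)
				continue
			}
			// Docker reports lowercase states such as "running" or "exited",
			// which the status output above renders as Running/Stopped.
			fmt.Println(node, state)
		}
	}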

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (120.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-750309 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-750309 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m59.58481361s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (74.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-750309 --control-plane -v=7 --alsologtostderr
E0723 14:56:10.027270 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 14:56:17.136936 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-750309 --control-plane -v=7 --alsologtostderr: (1m13.224402948s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-750309 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                    
x
+
TestJSONOutput/start/Command (60.05s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-838095 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0723 14:57:33.072453 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-838095 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m0.04248814s)
--- PASS: TestJSONOutput/start/Command (60.05s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-838095 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-838095 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.91s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-838095 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-838095 --output=json --user=testUser: (5.914667564s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-657178 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-657178 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.22914ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"18184a79-6317-4b8d-bcea-5b5a9a5fbf24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-657178] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"593e46fc-a6b2-4be4-a8e7-449be1afbb57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19319"}}
	{"specversion":"1.0","id":"044f7b44-6593-4688-9c83-3c72c9a0499b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bfc84beb-d34c-4c4e-b19b-0ff366b49b33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig"}}
	{"specversion":"1.0","id":"47eb6cdb-184a-4c0b-afcb-68d97f71ef29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube"}}
	{"specversion":"1.0","id":"07644186-89eb-479f-863c-41acd731891a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6187d62c-4577-485e-927a-3e3f26eed3af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"37b8f6ee-38a4-4c90-9fc3-245e1bdd6c3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-657178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-657178
--- PASS: TestErrorJSONOutput (0.23s)
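
Each line that --output=json emits is a CloudEvents-style envelope like the ones in the stdout block above (specversion, id, source, type, plus a string-keyed data payload). The sketch below consumes such a stream from stdin; the event struct is inferred from the sample output, not taken from minikube's source.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event matches the fields visible in the JSON lines above; data holds
	// string values such as "message", "currentstep", and "exitcode".
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip any non-JSON lines
			}
			fmt.Printf("%-40s %s\n", e.Type, e.Data["message"])
		}
	}

Piping a run such as `minikube start --output=json ... | go run main.go` would list the step, info, and error events in order, including the DRV_UNSUPPORTED_OS error shown above.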

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (37.1s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-373058 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-373058 --network=: (35.05800658s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-373058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-373058
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-373058: (2.018461457s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.10s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (33.31s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-735946 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-735946 --network=bridge: (31.263759554s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-735946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-735946
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-735946: (2.018373206s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.31s)

                                                
                                    
x
+
TestKicExistingNetwork (33.3s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-406807 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-406807 --network=existing-network: (31.206297178s)
helpers_test.go:175: Cleaning up "existing-network-406807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-406807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-406807: (1.931991648s)
--- PASS: TestKicExistingNetwork (33.30s)

                                                
                                    
x
+
TestKicCustomSubnet (37.49s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-653676 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-653676 --subnet=192.168.60.0/24: (35.356495044s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-653676 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-653676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-653676
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-653676: (2.104614713s)
--- PASS: TestKicCustomSubnet (37.49s)
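
The subnet assertion above reads the network's IPAM config back out of Docker with a format template. The same check, sketched in Go; the network name and expected subnet are the ones from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const network = "custom-subnet-653676"
		const want = "192.168.60.0/24"

		// Same template the test uses: the first IPAM config entry's subnet.
		out, err := exec.Command("docker", "network", "inspect", network,
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != want {
			fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
			return
		}
		fmt.Println("subnet matches", want)
	}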

                                                
                                    
x
+
TestKicStaticIP (33.13s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-036104 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-036104 --static-ip=192.168.200.200: (30.840101673s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-036104 ip
helpers_test.go:175: Cleaning up "static-ip-036104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-036104
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-036104: (2.151970271s)
--- PASS: TestKicStaticIP (33.13s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (68.3s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-296458 --driver=docker  --container-runtime=crio
E0723 15:01:10.026580 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 15:01:17.136151 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-296458 --driver=docker  --container-runtime=crio: (29.62830836s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-299231 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-299231 --driver=docker  --container-runtime=crio: (33.096350711s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-296458
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-299231
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-299231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-299231
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-299231: (1.943475036s)
helpers_test.go:175: Cleaning up "first-296458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-296458
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-296458: (2.455986396s)
--- PASS: TestMinikubeProfile (68.30s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-990481 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-990481 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.527805945s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.53s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-990481 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.45s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-003503 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-003503 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.449797742s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.45s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-003503 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-990481 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-990481 --alsologtostderr -v=5: (1.592530803s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-003503 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-003503
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-003503: (1.199729503s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.29s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-003503
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-003503: (7.293344916s)
--- PASS: TestMountStart/serial/RestartStopped (8.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-003503 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (85.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524888 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0723 15:02:40.180745 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-524888 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m24.721392113s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.22s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-524888 -- rollout status deployment/busybox: (2.861925357s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5xgvn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5z4vb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5xgvn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5z4vb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5xgvn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5z4vb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.79s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5xgvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5xgvn -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5z4vb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-524888 -- exec busybox-fc5497c4f-5z4vb -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.16s)
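
The host-ping test extracts the gateway IP with `awk 'NR==5' | cut -d' ' -f3`, i.e. the third single-space-separated field of line 5 of the nslookup output, and then pings it. A Go equivalent of that pipeline; the sample output is a typical busybox-style nslookup response, shown for illustration only (not captured from this run).

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mirrors `nslookup ... | awk 'NR==5' | cut -d' ' -f3`:
	// take line 5 and return its third single-space-separated field.
	func hostIP(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ") // cut splits on single spaces
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.58.1 host.minikube.internal\n"
		fmt.Println(hostIP(sample)) // 192.168.58.1, the address pinged above
	}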

                                                
                                    
x
+
TestMultiNode/serial/AddNode (28.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-524888 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-524888 -v 3 --alsologtostderr: (27.945891809s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.58s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-524888 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp testdata/cp-test.txt multinode-524888:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3254306631/001/cp-test_multinode-524888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888:/home/docker/cp-test.txt multinode-524888-m02:/home/docker/cp-test_multinode-524888_multinode-524888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m02 "sudo cat /home/docker/cp-test_multinode-524888_multinode-524888-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888:/home/docker/cp-test.txt multinode-524888-m03:/home/docker/cp-test_multinode-524888_multinode-524888-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m03 "sudo cat /home/docker/cp-test_multinode-524888_multinode-524888-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp testdata/cp-test.txt multinode-524888-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3254306631/001/cp-test_multinode-524888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888-m02:/home/docker/cp-test.txt multinode-524888:/home/docker/cp-test_multinode-524888-m02_multinode-524888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888 "sudo cat /home/docker/cp-test_multinode-524888-m02_multinode-524888.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888-m02:/home/docker/cp-test.txt multinode-524888-m03:/home/docker/cp-test_multinode-524888-m02_multinode-524888-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m03 "sudo cat /home/docker/cp-test_multinode-524888-m02_multinode-524888-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp testdata/cp-test.txt multinode-524888-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3254306631/001/cp-test_multinode-524888-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888-m03:/home/docker/cp-test.txt multinode-524888:/home/docker/cp-test_multinode-524888-m03_multinode-524888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888 "sudo cat /home/docker/cp-test_multinode-524888-m03_multinode-524888.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 cp multinode-524888-m03:/home/docker/cp-test.txt multinode-524888-m02:/home/docker/cp-test_multinode-524888-m03_multinode-524888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 ssh -n multinode-524888-m02 "sudo cat /home/docker/cp-test_multinode-524888-m03_multinode-524888-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.73s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-524888 node stop m03: (1.21716744s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-524888 status: exit status 7 (514.575966ms)

                                                
                                                
-- stdout --
	multinode-524888
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-524888-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-524888-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-524888 status --alsologtostderr: exit status 7 (494.782737ms)

                                                
                                                
-- stdout --
	multinode-524888
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-524888-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-524888-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0723 15:04:40.910631 3437923 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:04:40.910830 3437923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:04:40.910860 3437923 out.go:304] Setting ErrFile to fd 2...
	I0723 15:04:40.910880 3437923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:04:40.911178 3437923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 15:04:40.911388 3437923 out.go:298] Setting JSON to false
	I0723 15:04:40.911459 3437923 mustload.go:65] Loading cluster: multinode-524888
	I0723 15:04:40.911560 3437923 notify.go:220] Checking for updates...
	I0723 15:04:40.911957 3437923 config.go:182] Loaded profile config "multinode-524888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:04:40.911992 3437923 status.go:255] checking status of multinode-524888 ...
	I0723 15:04:40.912551 3437923 cli_runner.go:164] Run: docker container inspect multinode-524888 --format={{.State.Status}}
	I0723 15:04:40.935233 3437923 status.go:330] multinode-524888 host status = "Running" (err=<nil>)
	I0723 15:04:40.935259 3437923 host.go:66] Checking if "multinode-524888" exists ...
	I0723 15:04:40.935655 3437923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-524888
	I0723 15:04:40.952291 3437923 host.go:66] Checking if "multinode-524888" exists ...
	I0723 15:04:40.952610 3437923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 15:04:40.952666 3437923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-524888
	I0723 15:04:40.984649 3437923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37287 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/multinode-524888/id_rsa Username:docker}
	I0723 15:04:41.075904 3437923 ssh_runner.go:195] Run: systemctl --version
	I0723 15:04:41.080419 3437923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:04:41.092774 3437923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 15:04:41.147626 3437923 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-23 15:04:41.137494929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 15:04:41.148245 3437923 kubeconfig.go:125] found "multinode-524888" server: "https://192.168.58.2:8443"
	I0723 15:04:41.148282 3437923 api_server.go:166] Checking apiserver status ...
	I0723 15:04:41.148330 3437923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0723 15:04:41.159620 3437923 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	I0723 15:04:41.168774 3437923 api_server.go:182] apiserver freezer: "4:freezer:/docker/67650f3be84492fb29a9df2314526ffbb945be382c15f09e603742f982aecb6b/crio/crio-b5767c56a81eaed4d2de716c03d5769d4174ef5c875c2f4b1f5c23969339b652"
	I0723 15:04:41.168862 3437923 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/67650f3be84492fb29a9df2314526ffbb945be382c15f09e603742f982aecb6b/crio/crio-b5767c56a81eaed4d2de716c03d5769d4174ef5c875c2f4b1f5c23969339b652/freezer.state
	I0723 15:04:41.177718 3437923 api_server.go:204] freezer state: "THAWED"
	I0723 15:04:41.177754 3437923 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0723 15:04:41.185845 3437923 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0723 15:04:41.185875 3437923 status.go:422] multinode-524888 apiserver status = Running (err=<nil>)
	I0723 15:04:41.185887 3437923 status.go:257] multinode-524888 status: &{Name:multinode-524888 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 15:04:41.185905 3437923 status.go:255] checking status of multinode-524888-m02 ...
	I0723 15:04:41.186213 3437923 cli_runner.go:164] Run: docker container inspect multinode-524888-m02 --format={{.State.Status}}
	I0723 15:04:41.202809 3437923 status.go:330] multinode-524888-m02 host status = "Running" (err=<nil>)
	I0723 15:04:41.202834 3437923 host.go:66] Checking if "multinode-524888-m02" exists ...
	I0723 15:04:41.203131 3437923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-524888-m02
	I0723 15:04:41.219815 3437923 host.go:66] Checking if "multinode-524888-m02" exists ...
	I0723 15:04:41.220165 3437923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0723 15:04:41.220214 3437923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-524888-m02
	I0723 15:04:41.236716 3437923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37292 SSHKeyPath:/home/jenkins/minikube-integration/19319-3317687/.minikube/machines/multinode-524888-m02/id_rsa Username:docker}
	I0723 15:04:41.324200 3437923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0723 15:04:41.338078 3437923 status.go:257] multinode-524888-m02 status: &{Name:multinode-524888-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0723 15:04:41.338127 3437923 status.go:255] checking status of multinode-524888-m03 ...
	I0723 15:04:41.338458 3437923 cli_runner.go:164] Run: docker container inspect multinode-524888-m03 --format={{.State.Status}}
	I0723 15:04:41.354874 3437923 status.go:330] multinode-524888-m03 host status = "Stopped" (err=<nil>)
	I0723 15:04:41.354896 3437923 status.go:343] host is not running, skipping remaining checks
	I0723 15:04:41.354904 3437923 status.go:257] multinode-524888-m03 status: &{Name:multinode-524888-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
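
The two status runs above document the exit-code contract: `minikube status` prints per-node state and exits 7 once any node in the profile is stopped, while the fully running checks elsewhere in this report exit 0. A minimal Go sketch of that check, reusing the binary path and profile name from this run; driving the CLI from Go like this is an assumption for illustration, not part of the test suite:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from the log above; adjust for your setup.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-524888", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Matches the behaviour logged above: exit status 7 with a stopped node.
		fmt.Println("at least one node in the profile is stopped")
	}
}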

TestMultiNode/serial/StartAfterStop (9.88s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-524888 node start m03 -v=7 --alsologtostderr: (9.131496091s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.88s)

TestMultiNode/serial/RestartKeepsNodes (88.11s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-524888
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-524888
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-524888: (24.841847169s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524888 --wait=true -v=8 --alsologtostderr
E0723 15:06:10.026166 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 15:06:17.136309 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-524888 --wait=true -v=8 --alsologtostderr: (1m3.135709488s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-524888
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.11s)

TestMultiNode/serial/DeleteNode (5.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-524888 node delete m03: (4.570824446s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)
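
The readiness check above walks the node list with a kubectl go-template. The same template can be exercised directly with Go's text/template; the stub types below are hypothetical stand-ins for kubectl's JSON (template fields must be exported in Go, so .items/.status/.type from the log become .Items/.Status/.Type here):

package main

import (
	"os"
	"text/template"
)

// Minimal stand-ins for the fields the template dereferences.
type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

func main() {
	tpl := template.Must(template.New("ready").Parse(
		`{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))

	var list struct{ Items []node }
	var n node
	n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list.Items = []node{n}

	_ = tpl.Execute(os.Stdout, list) // prints " True" plus a newline for the one ready node
}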

TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-524888 stop: (23.673278683s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-524888 status: exit status 7 (86.627501ms)
-- stdout --
	multinode-524888
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-524888-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-524888 status --alsologtostderr: exit status 7 (92.974396ms)
-- stdout --
	multinode-524888
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-524888-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0723 15:06:48.378622 3445414 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:06:48.378771 3445414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:06:48.378782 3445414 out.go:304] Setting ErrFile to fd 2...
	I0723 15:06:48.378787 3445414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:06:48.379030 3445414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 15:06:48.379255 3445414 out.go:298] Setting JSON to false
	I0723 15:06:48.379296 3445414 mustload.go:65] Loading cluster: multinode-524888
	I0723 15:06:48.379411 3445414 notify.go:220] Checking for updates...
	I0723 15:06:48.379685 3445414 config.go:182] Loaded profile config "multinode-524888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0723 15:06:48.379695 3445414 status.go:255] checking status of multinode-524888 ...
	I0723 15:06:48.380255 3445414 cli_runner.go:164] Run: docker container inspect multinode-524888 --format={{.State.Status}}
	I0723 15:06:48.398972 3445414 status.go:330] multinode-524888 host status = "Stopped" (err=<nil>)
	I0723 15:06:48.398994 3445414 status.go:343] host is not running, skipping remaining checks
	I0723 15:06:48.399003 3445414 status.go:257] multinode-524888 status: &{Name:multinode-524888 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0723 15:06:48.399049 3445414 status.go:255] checking status of multinode-524888-m02 ...
	I0723 15:06:48.399349 3445414 cli_runner.go:164] Run: docker container inspect multinode-524888-m02 --format={{.State.Status}}
	I0723 15:06:48.420090 3445414 status.go:330] multinode-524888-m02 host status = "Stopped" (err=<nil>)
	I0723 15:06:48.420115 3445414 status.go:343] host is not running, skipping remaining checks
	I0723 15:06:48.420123 3445414 status.go:257] multinode-524888-m02 status: &{Name:multinode-524888-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

TestMultiNode/serial/RestartMultiNode (57.37s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524888 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-524888 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (56.681563914s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-524888 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.37s)

TestMultiNode/serial/ValidateNameConflict (35.31s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-524888
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524888-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-524888-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.590777ms)
-- stdout --
	* [multinode-524888-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-524888-m02' is duplicated with machine name 'multinode-524888-m02' in profile 'multinode-524888'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-524888-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-524888-m03 --driver=docker  --container-runtime=crio: (32.961874346s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-524888
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-524888: exit status 80 (308.035727ms)
-- stdout --
	* Adding node m03 to cluster multinode-524888 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-524888-m03 already exists in multinode-524888-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-524888-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-524888-m03: (1.901236641s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.31s)

TestPreload (130.81s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-925036 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-925036 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m39.449585923s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-925036 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-925036 image pull gcr.io/k8s-minikube/busybox: (1.73061416s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-925036
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-925036: (5.779205214s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-925036 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-925036 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.170774126s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-925036 image list
helpers_test.go:175: Cleaning up "test-preload-925036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-925036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-925036: (2.401306509s)
--- PASS: TestPreload (130.81s)

TestScheduledStopUnix (106.55s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-045914 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-045914 --memory=2048 --driver=docker  --container-runtime=crio: (30.787767332s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-045914 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-045914 -n scheduled-stop-045914
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-045914 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-045914 --cancel-scheduled
E0723 15:11:10.025904 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 15:11:17.136181 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-045914 -n scheduled-stop-045914
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-045914
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-045914 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-045914
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-045914: exit status 7 (75.365094ms)
-- stdout --
	scheduled-stop-045914
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-045914 -n scheduled-stop-045914
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-045914 -n scheduled-stop-045914: exit status 7 (66.003099ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-045914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-045914
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-045914: (4.248427764s)
--- PASS: TestScheduledStopUnix (106.55s)

TestInsufficientStorage (10.96s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-028797 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-028797 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.504690856s)
-- stdout --
	{"specversion":"1.0","id":"227cd568-c185-4809-b69f-33342c79912e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-028797] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"db308216-dc33-4876-a586-dd18c24d451c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19319"}}
	{"specversion":"1.0","id":"78192a71-2c0e-41d9-9068-e1f3d7fe2387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a1b3a9d9-48cf-430e-ac8d-145671254f0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig"}}
	{"specversion":"1.0","id":"72577eec-ca0f-4db6-9776-36490c51f041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube"}}
	{"specversion":"1.0","id":"0634e47b-1f99-45cc-8974-8935577b57cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"be6786df-d1b4-4c10-8bc8-f32a6b924c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"77581c49-6213-4c96-8672-ef840e182cb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b389a740-a5c9-4a14-9daa-582d805fb86c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"174e00fa-a598-4558-ba52-43471c95d347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc03d3e5-c7de-48fc-81bd-4522fadd1ecf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4e30903d-33d7-40a2-96b4-6d3443925060","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-028797\" primary control-plane node in \"insufficient-storage-028797\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4225b574-acd2-49ac-9aee-c5318a29428d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721687125-19319 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"996a664b-686d-4342-8726-8481724b555f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e7e38f8-6274-47cd-8355-046e5c7c925d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-028797 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-028797 --output=json --layout=cluster: exit status 7 (289.242598ms)
-- stdout --
	{"Name":"insufficient-storage-028797","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-028797","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0723 15:12:31.279204 3463196 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-028797" does not appear in /home/jenkins/minikube-integration/19319-3317687/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-028797 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-028797 --output=json --layout=cluster: exit status 7 (278.812141ms)
-- stdout --
	{"Name":"insufficient-storage-028797","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-028797","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0723 15:12:31.559278 3463259 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-028797" does not appear in /home/jenkins/minikube-integration/19319-3317687/kubeconfig
	E0723 15:12:31.569637 3463259 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/insufficient-storage-028797/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-028797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-028797
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-028797: (1.883706808s)
--- PASS: TestInsufficientStorage (10.96s)
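
Every line of the --output=json run above is a single CloudEvent, and the failure itself arrives as an io.k8s.sigs.minikube.error event whose data fields (name, exitcode, message, advice) are all strings. A sketch that filters such a stream for error events; reading the events from stdin is an assumption (for example, the start command's output piped in):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the two fields of the logged CloudEvents that matter here.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long

	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // ignore anything that is not a JSON event
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}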

TestRunningBinaryUpgrade (78.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3189882879 start -p running-upgrade-669355 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3189882879 start -p running-upgrade-669355 --memory=2200 --vm-driver=docker  --container-runtime=crio: (50.497328231s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-669355 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0723 15:19:20.181228 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-669355 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.287099404s)
helpers_test.go:175: Cleaning up "running-upgrade-669355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-669355
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-669355: (2.812010093s)
--- PASS: TestRunningBinaryUpgrade (78.79s)

TestKubernetesUpgrade (385.25s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-918736 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-918736 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m10.481392057s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-918736
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-918736: (1.20968877s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-918736 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-918736 status --format={{.Host}}: exit status 7 (68.933534ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-918736 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0723 15:16:10.026285 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 15:16:17.136146 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-918736 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.93583788s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-918736 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-918736 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-918736 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (115.440566ms)
-- stdout --
	* [kubernetes-upgrade-918736] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-918736
	    minikube start -p kubernetes-upgrade-918736 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9187362 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-918736 --kubernetes-version=v1.31.0-beta.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-918736 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-918736 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.036726639s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-918736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-918736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-918736: (2.27813076s)
--- PASS: TestKubernetesUpgrade (385.25s)

TestMissingContainerUpgrade (146.11s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1043488853 start -p missing-upgrade-018960 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1043488853 start -p missing-upgrade-018960 --memory=2200 --driver=docker  --container-runtime=crio: (1m12.714431874s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-018960
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-018960: (10.465135322s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-018960
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-018960 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-018960 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.172293657s)
helpers_test.go:175: Cleaning up "missing-upgrade-018960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-018960
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-018960: (2.005704717s)
--- PASS: TestMissingContainerUpgrade (146.11s)

TestPause/serial/Start (73.48s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-864402 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-864402 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m13.483544446s)
--- PASS: TestPause/serial/Start (73.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-231608 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-231608 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (95.486476ms)
-- stdout --
	* [NoKubernetes-231608] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (46.66s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-231608 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-231608 --driver=docker  --container-runtime=crio: (46.305871838s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-231608 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.66s)

TestNoKubernetes/serial/StartWithStopK8s (16.77s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-231608 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-231608 --no-kubernetes --driver=docker  --container-runtime=crio: (14.524221995s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-231608 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-231608 status -o json: exit status 2 (294.1615ms)
-- stdout --
	{"Name":"NoKubernetes-231608","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-231608
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-231608: (1.948025049s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.77s)
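
The status call above still prints a complete JSON object even though it exits with status 2 (Host running with Kubelet and APIServer stopped, as expected under --no-kubernetes). A small decoding sketch; the literal below is copied verbatim from the log:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := `{"Name":"NoKubernetes-231608","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// The Host/Kubelet split distinguishes a no-kubernetes profile
	// (host up, kubelet down) from a fully stopped one in this report.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}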

TestNoKubernetes/serial/Start (6.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-231608 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-231608 --no-kubernetes --driver=docker  --container-runtime=crio: (6.10921387s)
--- PASS: TestNoKubernetes/serial/Start (6.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-231608 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-231608 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.713099ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (1.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-231608
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-231608: (1.242793518s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (6.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-231608 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-231608 --driver=docker  --container-runtime=crio: (6.754840123s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-231608 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-231608 "sudo systemctl is-active --quiet service kubelet": exit status 1 (242.809587ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestStoppedBinaryUpgrade/Setup (1.21s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.21s)

TestStoppedBinaryUpgrade/Upgrade (101.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3324705535 start -p stopped-upgrade-700384 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3324705535 start -p stopped-upgrade-700384 --memory=2200 --vm-driver=docker  --container-runtime=crio: (52.272034472s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3324705535 -p stopped-upgrade-700384 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3324705535 -p stopped-upgrade-700384 stop: (2.547791848s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-700384 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-700384 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.980229163s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.80s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-700384
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-700384: (1.393239202s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

TestNetworkPlugins/group/false (4.06s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-727446 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-727446 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (244.568784ms)
-- stdout --
	* [false-727446] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0723 15:20:12.488308 3502493 out.go:291] Setting OutFile to fd 1 ...
	I0723 15:20:12.488542 3502493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:20:12.488564 3502493 out.go:304] Setting ErrFile to fd 2...
	I0723 15:20:12.488584 3502493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0723 15:20:12.488834 3502493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3317687/.minikube/bin
	I0723 15:20:12.489252 3502493 out.go:298] Setting JSON to false
	I0723 15:20:12.490214 3502493 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":86559,"bootTime":1721661454,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0723 15:20:12.490312 3502493 start.go:139] virtualization:  
	I0723 15:20:12.493400 3502493 out.go:177] * [false-727446] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0723 15:20:12.498356 3502493 out.go:177]   - MINIKUBE_LOCATION=19319
	I0723 15:20:12.498428 3502493 notify.go:220] Checking for updates...
	I0723 15:20:12.502623 3502493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0723 15:20:12.504871 3502493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19319-3317687/kubeconfig
	I0723 15:20:12.507119 3502493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3317687/.minikube
	I0723 15:20:12.509583 3502493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0723 15:20:12.511881 3502493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0723 15:20:12.514635 3502493 config.go:182] Loaded profile config "kubernetes-upgrade-918736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0723 15:20:12.514756 3502493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0723 15:20:12.559104 3502493 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
	I0723 15:20:12.559214 3502493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0723 15:20:12.637989 3502493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-23 15:20:12.625596517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
	I0723 15:20:12.638089 3502493 docker.go:307] overlay module found
	I0723 15:20:12.640117 3502493 out.go:177] * Using the docker driver based on user configuration
	I0723 15:20:12.641823 3502493 start.go:297] selected driver: docker
	I0723 15:20:12.641839 3502493 start.go:901] validating driver "docker" against <nil>
	I0723 15:20:12.641860 3502493 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0723 15:20:12.644183 3502493 out.go:177] 
	W0723 15:20:12.646734 3502493 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0723 15:20:12.648541 3502493 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-727446 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-727446

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-727446

>>> host: /etc/nsswitch.conf:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /etc/hosts:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /etc/resolv.conf:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-727446

>>> host: crictl pods:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: crictl containers:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> k8s: describe netcat deployment:
error: context "false-727446" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-727446" does not exist

>>> k8s: netcat logs:
error: context "false-727446" does not exist

>>> k8s: describe coredns deployment:
error: context "false-727446" does not exist

>>> k8s: describe coredns pods:
error: context "false-727446" does not exist

>>> k8s: coredns logs:
error: context "false-727446" does not exist

>>> k8s: describe api server pod(s):
error: context "false-727446" does not exist

>>> k8s: api server logs:
error: context "false-727446" does not exist

>>> host: /etc/cni:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: ip a s:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: ip r s:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: iptables-save:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: iptables table nat:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> k8s: describe kube-proxy daemon set:
error: context "false-727446" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-727446" does not exist

>>> k8s: kube-proxy logs:
error: context "false-727446" does not exist

>>> host: kubelet daemon status:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: kubelet daemon config:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> k8s: kubelet logs:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:20:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-918736
contexts:
- context:
    cluster: kubernetes-upgrade-918736
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:20:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-918736
  name: kubernetes-upgrade-918736
current-context: kubernetes-upgrade-918736
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-918736
  user:
    client-certificate: /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/kubernetes-upgrade-918736/client.crt
    client-key: /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/kubernetes-upgrade-918736/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-727446

>>> host: docker daemon status:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: docker daemon config:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /etc/docker/daemon.json:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: docker system info:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: cri-docker daemon status:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: cri-docker daemon config:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: cri-dockerd version:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: containerd daemon status:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: containerd daemon config:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /etc/containerd/config.toml:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: containerd config dump:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: crio daemon status:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: crio daemon config:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: /etc/crio:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

>>> host: crio config:
* Profile "false-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-727446"

----------------------- debugLogs end: false-727446 [took: 3.64767927s] --------------------------------
helpers_test.go:175: Cleaning up "false-727446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-727446
--- PASS: TestNetworkPlugins/group/false (4.06s)
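For reference, the MK_USAGE failure captured in the stderr block above is the expected outcome of this test case, so the group passes. A minimal reproduction sketch, assuming the test passes --cni=false (implied by the plugin name "false"; the profile name and binary path are taken from this run):

	$ out/minikube-linux-arm64 start -p false-727446 --cni=false --driver=docker --container-runtime=crio
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI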

TestStartStop/group/old-k8s-version/serial/FirstStart (177.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-027400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-027400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m57.847589284s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (177.85s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-027400 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6174286-1518-4310-8a16-97aed9cf9d0a] Pending
helpers_test.go:344: "busybox" [e6174286-1518-4310-8a16-97aed9cf9d0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e6174286-1518-4310-8a16-97aed9cf9d0a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005187546s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-027400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)
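The testdata/busybox.yaml fixture itself is not reproduced in the log. A minimal equivalent, sketched from what the output does show (pod name busybox, label integration-test=busybox, and the gcr.io/k8s-minikube/busybox:1.28.4-glibc image listed later under VerifyKubernetesImages); the sleep command is an assumption to keep the pod running for the exec check:

	$ kubectl --context old-k8s-version-027400 create -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    # assumed: keep the pod alive so the test can exec into it
	    command: ['sh', '-c', 'sleep 3600']
	EOF
	$ kubectl --context old-k8s-version-027400 exec busybox -- /bin/sh -c "ulimit -n"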

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-027400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-027400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.289315291s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-027400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)
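The --images/--registries overrides above deliberately point metrics-server at an unreachable registry (fake.domain), so the step verifies that the deployment spec was rewritten rather than that the image actually pulls. One way to inspect the rewrite with plain kubectl (the exact composed image reference shown is an assumption):

	$ kubectl --context old-k8s-version-027400 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	fake.domain/registry.k8s.io/echoserver:1.4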

TestStartStop/group/no-preload/serial/FirstStart (68.4s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-809063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-809063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m8.399596153s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.40s)

TestStartStop/group/old-k8s-version/serial/Stop (13.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-027400 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-027400 --alsologtostderr -v=3: (13.263001916s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-027400 -n old-k8s-version-027400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-027400 -n old-k8s-version-027400: exit status 7 (79.643543ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-027400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
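The "status error: exit status 7 (may be ok)" note is expected after a stop: minikube status appears to encode state as a bitmask exit code, so 7 reads as host (1) + cluster (2) + Kubernetes (4) all not running, which is exactly the state this step wants. A quick check (sketch):

	$ out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-027400 -n old-k8s-version-027400; echo $?
	Stopped
	7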

TestStartStop/group/old-k8s-version/serial/SecondStart (153.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-027400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-027400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m33.280928289s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-027400 -n old-k8s-version-027400
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (153.65s)

TestStartStop/group/no-preload/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-809063 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0f3d2d20-83ba-47fb-bf8b-644e052a16fb] Pending
helpers_test.go:344: "busybox" [0f3d2d20-83ba-47fb-bf8b-644e052a16fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0f3d2d20-83ba-47fb-bf8b-644e052a16fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005163562s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-809063 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.41s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-809063 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-809063 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-809063 --alsologtostderr -v=3
E0723 15:26:10.026218 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
E0723 15:26:17.136826 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-809063 --alsologtostderr -v=3: (11.96709099s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-809063 -n no-preload-809063
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-809063 -n no-preload-809063: exit status 7 (68.375721ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-809063 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (280.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-809063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-809063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (4m39.86678608s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-809063 -n no-preload-809063
E0723 15:31:00.741457 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (280.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nhdmg" [a32d4c28-e2be-4e05-8df0-b389e4fea59d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00384442s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nhdmg" [a32d4c28-e2be-4e05-8df0-b389e4fea59d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004344467s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-027400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-027400 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-027400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-027400 -n old-k8s-version-027400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-027400 -n old-k8s-version-027400: exit status 2 (341.560806ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-027400 -n old-k8s-version-027400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-027400 -n old-k8s-version-027400: exit status 2 (307.176391ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-027400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-027400 -n old-k8s-version-027400
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-027400 -n old-k8s-version-027400
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)
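Condensed, the Pause check above is a pause / verify / unpause / verify round trip against the same profile (exit status 2 from status while paused is tolerated by the test):

	$ out/minikube-linux-arm64 pause -p old-k8s-version-027400 --alsologtostderr -v=1
	$ out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-027400   # Paused (exit status 2)
	$ out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-027400     # Stopped (exit status 2)
	$ out/minikube-linux-arm64 unpause -p old-k8s-version-027400 --alsologtostderr -v=1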

TestStartStop/group/embed-certs/serial/FirstStart (59.21s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-915824 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-915824 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (59.206335865s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.21s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-915824 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7d95a33f-a52b-44a7-870f-4e36b549c8cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7d95a33f-a52b-44a7-870f-4e36b549c8cc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003624413s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-915824 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-915824 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-915824 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (11.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-915824 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-915824 --alsologtostderr -v=3: (11.972728741s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-915824 -n embed-certs-915824
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-915824 -n embed-certs-915824: exit status 7 (67.610972ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-915824 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (266.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-915824 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0723 15:29:38.816977 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:38.822269 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:38.832573 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:38.853133 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:38.893434 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:38.973750 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:39.134219 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:39.456326 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:40.096922 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:41.377685 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:43.938218 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:49.058410 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:29:59.298597 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:30:19.780542 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:30:53.073531 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-915824 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m25.906633065s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-915824 -n embed-certs-915824
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-qhsx5" [f0b74172-6100-4080-9cb1-1e2017c90051] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003511451s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-qhsx5" [f0b74172-6100-4080-9cb1-1e2017c90051] Running
E0723 15:31:10.026025 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004044376s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-809063 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-809063 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-809063 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-809063 -n no-preload-809063
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-809063 -n no-preload-809063: exit status 2 (321.836947ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-809063 -n no-preload-809063
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-809063 -n no-preload-809063: exit status 2 (379.267499ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-809063 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-809063 -n no-preload-809063
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-809063 -n no-preload-809063
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-223097 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-223097 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (58.279270507s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.28s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-223097 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [47522eaf-cf4c-415a-8732-c269fd8d1219] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [47522eaf-cf4c-415a-8732-c269fd8d1219] Running
E0723 15:32:22.661710 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003859171s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-223097 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-223097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-223097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002080214s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-223097 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-223097 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-223097 --alsologtostderr -v=3: (12.086495286s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097: exit status 7 (81.155789ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-223097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
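
The "(may be ok)" note reflects how the harness reads minikube status: the command exits non-zero when the host is not running, and for this step a stopped profile is exactly what is expected. A hedged Go sketch of extracting that exit code; the binary path matches this run, everything else is illustrative:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runStatus returns the exit code of `minikube status` for a profile.
	// A non-zero code (exit status 7 in the log above) indicates a stopped
	// host, which this step expects. Sketch only.
	func runStatus(profile string) (int, error) {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		if err := cmd.Run(); err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				return exitErr.ExitCode(), nil // process ran; report its code
			}
			return -1, err // the binary never started
		}
		return 0, nil
	}

	func main() {
		code, err := runStatus("default-k8s-diff-port-223097")
		fmt.Println(code, err) // expect 7, <nil> while the profile is stopped
	}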
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-223097 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-223097 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m27.739703748s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.12s)
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rvk9j" [7c611ada-2e6c-4e56-a6f0-2a90d0294ce6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003890485s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rvk9j" [7c611ada-2e6c-4e56-a6f0-2a90d0294ce6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003622538s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-915824 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-915824 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
TestStartStop/group/embed-certs/serial/Pause (2.99s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-915824 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-915824 -n embed-certs-915824
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-915824 -n embed-certs-915824: exit status 2 (330.453247ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-915824 -n embed-certs-915824
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-915824 -n embed-certs-915824: exit status 2 (325.671344ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-915824 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-915824 -n embed-certs-915824
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-915824 -n embed-certs-915824
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.99s)
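
The pause cycle above is: pause the profile, confirm `status` now exits 2 (APIServer reports "Paused", Kubelet "Stopped"), unpause, then confirm `status` exits 0 again. A compact Go sketch of the same sequence, with error handling stripped down; treat it as illustrative, not the harness's code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary and returns its exit code, or -1 if
	// the process never started. Sketch only.
	func run(args ...string) int {
		cmd := exec.Command("out/minikube-linux-arm64", args...)
		_ = cmd.Run()
		if cmd.ProcessState == nil {
			return -1
		}
		return cmd.ProcessState.ExitCode()
	}

	func main() {
		p := "embed-certs-915824"
		run("pause", "-p", p)
		fmt.Println(run("status", "--format={{.APIServer}}", "-p", p, "-n", p)) // 2 while paused
		fmt.Println(run("status", "--format={{.Kubelet}}", "-p", p, "-n", p))   // 2 while paused
		run("unpause", "-p", p)
		fmt.Println(run("status", "--format={{.APIServer}}", "-p", p, "-n", p)) // 0 after unpause
	}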
TestStartStop/group/newest-cni/serial/FirstStart (35.3s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-453130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-453130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (35.29617582s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.30s)
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-453130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-453130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.282946028s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)
TestStartStop/group/newest-cni/serial/Stop (1.28s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-453130 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-453130 --alsologtostderr -v=3: (1.282233045s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-453130 -n newest-cni-453130
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-453130 -n newest-cni-453130: exit status 7 (73.397863ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-453130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
TestStartStop/group/newest-cni/serial/SecondStart (15.07s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-453130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0723 15:34:38.817668 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-453130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (14.583494582s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-453130 -n newest-cni-453130
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.07s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-453130 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)
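
The image check shells out to `image list --format=json` and scans the result for images outside minikube's expected set (the kindnet image above is flagged but tolerated). The JSON schema is not shown in this log, so the sketch below deliberately decodes into a generic value rather than assuming field names:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "newest-cni-453130",
			"image", "list", "--format=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Decode into interface{} because the exact schema isn't shown here.
		var images interface{}
		if err := json.Unmarshal(out, &images); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%+v\n", images)
	}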
TestStartStop/group/newest-cni/serial/Pause (3.36s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-453130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-453130 --alsologtostderr -v=1: (1.199882126s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-453130 -n newest-cni-453130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-453130 -n newest-cni-453130: exit status 2 (362.159898ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-453130 -n newest-cni-453130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-453130 -n newest-cni-453130: exit status 2 (328.268709ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-453130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-453130 -n newest-cni-453130
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-453130 -n newest-cni-453130
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.36s)
TestNetworkPlugins/group/auto/Start (61.07s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0723 15:35:06.501930 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
E0723 15:35:58.154746 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:35:58.160335 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:35:58.170659 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:35:58.191025 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:35:58.231334 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m1.061968801s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.07s)
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-727446 "pgrep -a kubelet"
E0723 15:35:58.312221 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:35:58.472611 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)
TestNetworkPlugins/group/auto/NetCatPod (12.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-727446 replace --force -f testdata/netcat-deployment.yaml
E0723 15:35:58.794028 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-56vfv" [2036e4d1-fa91-41c3-a05c-39bb01886323] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0723 15:35:59.434437 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:36:00.181788 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 15:36:00.714702 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:36:03.277210 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-56vfv" [2036e4d1-fa91-41c3-a05c-39bb01886323] Running
E0723 15:36:08.397713 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:36:10.025947 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/addons-140056/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004517553s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.30s)
TestNetworkPlugins/group/auto/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-727446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)
TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)
TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
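
Together, the last three checks probe the cluster network from inside the netcat pod: DNS resolves kubernetes.default, Localhost connects to the pod's own port, and HairPin connects back to the pod through its own Service name, which only succeeds when hairpin traffic is handled. A Go sketch running the same three probes via kubectl; the commands mirror the log, the loop around them is illustrative:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// The three probes from the log: in-pod DNS, localhost, and hairpin.
		probes := [][]string{
			{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
			{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
			{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
		}
		for _, p := range probes {
			args := append([]string{"--context", "auto-727446"}, p...)
			if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
				log.Fatalf("probe %v failed: %v\n%s", p, err, out)
			}
		}
	}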
TestNetworkPlugins/group/kindnet/Start (60.54s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0723 15:36:39.118600 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m0.536878981s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.54s)
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-68rgq" [90adaf79-4553-4164-aacc-7d4ee45ba187] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004183298s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-68rgq" [90adaf79-4553-4164-aacc-7d4ee45ba187] Running
E0723 15:37:20.079785 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004149141s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-223097 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-223097 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-223097 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097: exit status 2 (354.614168ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097: exit status 2 (303.211108ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-223097 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-223097 -n default-k8s-diff-port-223097
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)
E0723 15:42:17.715754 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:17.721101 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:17.731429 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:17.751742 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:17.792031 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:17.872316 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:18.032895 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:18.353569 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:18.994311 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
E0723 15:42:20.275472 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
TestNetworkPlugins/group/calico/Start (75.13s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.130205637s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.13s)
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6lchm" [af1eea8c-e2b0-4e9e-b38e-51b307199e7d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004814966s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-727446 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
TestNetworkPlugins/group/kindnet/NetCatPod (14.33s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-727446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-q4jwt" [94c102b3-eafe-4f8f-a83e-4b230d19fc07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-q4jwt" [94c102b3-eafe-4f8f-a83e-4b230d19fc07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.003677838s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.33s)
TestNetworkPlugins/group/kindnet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-727446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)
TestNetworkPlugins/group/kindnet/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)
TestNetworkPlugins/group/kindnet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)
TestNetworkPlugins/group/custom-flannel/Start (72.84s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.837660548s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.84s)
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4ld9f" [a722fd13-d3bd-44ed-b571-61d4e8a37340] Running
E0723 15:38:42.002386 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005012675s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-727446 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)
TestNetworkPlugins/group/calico/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-727446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hgjv9" [322a23e5-88cb-4cd0-8b55-0dab56c25d47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hgjv9" [322a23e5-88cb-4cd0-8b55-0dab56c25d47] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004183908s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)
TestNetworkPlugins/group/calico/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-727446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)
TestNetworkPlugins/group/calico/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)
TestNetworkPlugins/group/calico/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)
TestNetworkPlugins/group/enable-default-cni/Start (94.03s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m34.032826548s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.03s)
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-727446 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.68s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-727446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-s6wl4" [26a44ec5-df56-4e6c-bb78-f6cb7b71f14b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-s6wl4" [26a44ec5-df56-4e6c-bb78-f6cb7b71f14b] Running
E0723 15:39:38.817081 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/old-k8s-version-027400/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003896676s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.68s)
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-727446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
TestNetworkPlugins/group/custom-flannel/HairPin (0.43s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.43s)
TestNetworkPlugins/group/flannel/Start (65.77s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.765622515s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.77s)
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-727446 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.45s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-727446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-h562w" [10303caa-ced3-46a4-8782-2f7c6998ef81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0723 15:40:58.153956 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
E0723 15:40:58.841638 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:40:58.847106 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:40:58.857622 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:40:58.878226 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:40:58.918829 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:40:58.999757 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:40:59.160213 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:40:59.480402 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:41:00.130936 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
E0723 15:41:01.411707 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-h562w" [10303caa-ced3-46a4-8782-2f7c6998ef81] Running
E0723 15:41:03.972559 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004282969s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.45s)
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-727446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lfmlh" [ba6d05f8-1a31-44a9-a244-6727978e0a90] Running
E0723 15:41:17.136108 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/functional-054469/client.crt: no such file or directory
E0723 15:41:19.333340 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004702034s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-727446 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)
TestNetworkPlugins/group/flannel/NetCatPod (12.35s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-727446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7l7rs" [c1971bba-d635-488c-b732-fa137526e66d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0723 15:41:25.843414 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/no-preload-809063/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7l7rs" [c1971bba-d635-488c-b732-fa137526e66d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004402203s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.35s)
TestNetworkPlugins/group/bridge/Start (51.41s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-727446 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (51.407237293s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.41s)
TestNetworkPlugins/group/flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-727446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)
TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)
TestNetworkPlugins/group/flannel/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-727446 "pgrep -a kubelet"
E0723 15:42:20.774719 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/auto-727446/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-727446 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9qfhx" [11bb7d78-4d6e-47ae-b7ee-d1905566c555] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0723 15:42:22.835954 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-9qfhx" [11bb7d78-4d6e-47ae-b7ee-d1905566c555] Running
E0723 15:42:27.956636 3323080 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/default-k8s-diff-port-223097/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003515429s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)
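For readers without the minikube tree handy: the contents of testdata/netcat-deployment.yaml are not reproduced in this log. The manifest below is a hypothetical reconstruction inferred only from details visible above (a Deployment named netcat with label app=netcat and a dnsutils container, plus a Service named netcat on port 8080, implied by the hairpin probe); the image and command are assumptions, not the real fixture:

# Hypothetical sketch of testdata/netcat-deployment.yaml, inferred from this log; not the actual file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat              # the label the test waits on ("app=netcat")
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils         # container name visible in the Pending status above
        image: busybox         # assumption: any image providing nc and nslookup
        command: ["/bin/sh", "-c", "while true; do nc -l -p 8080; done"]  # keep a listener on the probed port
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: netcat                 # service name implied by the hairpin probe "nc netcat 8080"
spec:
  selector:
    app: netcat
  ports:
  - port: 8080
    targetPort: 8080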

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-727446 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
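The three probes above for each CNI (DNS, Localhost, HairPin) can be replayed by hand against any cluster that is still up; a minimal sketch, assuming the bridge-727446 profile and its netcat deployment still exist:

# resolve the in-cluster API service through cluster DNS
kubectl --context bridge-727446 exec deployment/netcat -- nslookup kubernetes.default
# dial the pod's own localhost on the probed port
kubectl --context bridge-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# hairpin: dial the pod's own service by name
kubectl --context bridge-727446 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"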

Test skip (33/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-248386 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-248386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-248386
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/Volcano (0s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-684104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-684104
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (5.09s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-727446 [pass: true] --------------------------------
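(Note: the kubenet-727446 profile was never created, since the test was skipped before start; every probe below therefore reports a missing context or profile, and the kubectl config dump further down shows only the leftover kubernetes-upgrade-918736 context from an earlier test.)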
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-727446

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-727446

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /etc/hosts:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /etc/resolv.conf:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-727446

>>> host: crictl pods:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: crictl containers:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> k8s: describe netcat deployment:
error: context "kubenet-727446" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-727446" does not exist

>>> k8s: netcat logs:
error: context "kubenet-727446" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-727446" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-727446" does not exist

>>> k8s: coredns logs:
error: context "kubenet-727446" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-727446" does not exist

>>> k8s: api server logs:
error: context "kubenet-727446" does not exist

>>> host: /etc/cni:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: ip a s:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: ip r s:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: iptables-save:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: iptables table nat:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-727446" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-727446" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-727446" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: kubelet daemon config:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> k8s: kubelet logs:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:20:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-918736
contexts:
- context:
    cluster: kubernetes-upgrade-918736
    extensions:
    - extension:
        last-update: Tue, 23 Jul 2024 15:20:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-918736
  name: kubernetes-upgrade-918736
current-context: kubernetes-upgrade-918736
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-918736
  user:
    client-certificate: /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/kubernetes-upgrade-918736/client.crt
    client-key: /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/kubernetes-upgrade-918736/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-727446

>>> host: docker daemon status:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: docker daemon config:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: docker system info:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: cri-docker daemon status:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: cri-docker daemon config:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: cri-dockerd version:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: containerd daemon status:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: containerd daemon config:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: containerd config dump:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: crio daemon status:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: crio daemon config:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: /etc/crio:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

>>> host: crio config:
* Profile "kubenet-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-727446"

----------------------- debugLogs end: kubenet-727446 [took: 4.881096322s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-727446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-727446
--- SKIP: TestNetworkPlugins/group/kubenet (5.09s)

TestNetworkPlugins/group/cilium (3.88s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-727446 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-727446" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19319-3317687/.minikube/ca.crt
extensions:
- extension:
last-update: Tue, 23 Jul 2024 15:20:13 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-918736
contexts:
- context:
cluster: kubernetes-upgrade-918736
extensions:
- extension:
last-update: Tue, 23 Jul 2024 15:20:13 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: kubernetes-upgrade-918736
name: kubernetes-upgrade-918736
current-context: kubernetes-upgrade-918736
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-918736
user:
client-certificate: /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/kubernetes-upgrade-918736/client.crt
client-key: /home/jenkins/minikube-integration/19319-3317687/.minikube/profiles/kubernetes-upgrade-918736/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-727446

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: docker daemon config:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: docker system info:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: cri-docker daemon status:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: cri-docker daemon config:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: cri-dockerd version:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: containerd daemon status:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: containerd daemon config:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: containerd config dump:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: crio daemon status:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: crio daemon config:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: /etc/crio:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

>>> host: crio config:
* Profile "cilium-727446" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-727446"

----------------------- debugLogs end: cilium-727446 [took: 3.737353383s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-727446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-727446
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)