Test Report: Docker_Linux_crio_arm64 20083

6c4fcf300662436f71bcf8696a35dd22d9fca43a:2024-12-12:37445

Failed tests (2/330)

|-------|-----------------------------------|--------------|
| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress       | 153.33       |
| 38    | TestAddons/parallel/MetricsServer | 367.94       |
|-------|-----------------------------------|--------------|
TestAddons/parallel/Ingress (153.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-680529 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-680529 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-680529 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d6113dc4-e697-49af-83e3-24085d447e3a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d6113dc4-e697-49af-83e3-24085d447e3a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003749373s
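For reference, the run=nginx wait above corresponds to a plain kubectl readiness check along these lines (a sketch, assuming the same kube context; the 8m0s budget comes from the test step above):

	kubectl --context addons-680529 wait --for=condition=ready pod -l run=nginx --timeout=8m0s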
I1211 23:59:11.673999  272599 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-680529 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.270292468s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
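The failure mode here: curl exits with code 28 on an operation timeout, and minikube ssh surfaces the remote command's failure as its own exit status 1, with the "Process exited with status 28" message above on stderr. A minimal way to reproduce the failing probe by hand (a sketch; profile name and Host header are taken from the step above, the explicit --max-time is an added assumption):

	out/minikube-linux-arm64 -p addons-680529 ssh "curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	echo "exit: $?"   # 1 on failure here, with curl's timeout code 28 reported on stderr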
addons_test.go:286: (dbg) Run:  kubectl --context addons-680529 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
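The nslookup step queries the ingress-dns addon directly at the node address obtained from the preceding ip step; the two commands combine into one line (a sketch using the same binary and profile):

	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-680529 ip)"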
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-680529
helpers_test.go:235: (dbg) docker inspect addons-680529:

-- stdout --
	[
	    {
	        "Id": "1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2",
	        "Created": "2024-12-11T23:53:50.308547561Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-11T23:53:50.457736916Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02e8be8b1127faa30f09fff745d2a6d385248178d204468bf667a69a71dbf447",
	        "ResolvConfPath": "/var/lib/docker/containers/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/hosts",
	        "LogPath": "/var/lib/docker/containers/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2-json.log",
	        "Name": "/addons-680529",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-680529:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-680529",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a84df4a719abdcecfc7f87bac585e4d175f7e4e2636079a8f9517afb944c65a0-init/diff:/var/lib/docker/overlay2/cae28b97ef808ae95cc2fc3d05edfc376b87c790784199a6aea276c80f286d94/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a84df4a719abdcecfc7f87bac585e4d175f7e4e2636079a8f9517afb944c65a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a84df4a719abdcecfc7f87bac585e4d175f7e4e2636079a8f9517afb944c65a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a84df4a719abdcecfc7f87bac585e4d175f7e4e2636079a8f9517afb944c65a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-680529",
	                "Source": "/var/lib/docker/volumes/addons-680529/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-680529",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-680529",
	                "name.minikube.sigs.k8s.io": "addons-680529",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56251203e7d0287d933a0bbfa4ec2bb99d01ae9e5c606af8a1ed6fc050471037",
	            "SandboxKey": "/var/run/docker/netns/56251203e7d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-680529": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ddce59baa39155a23e7cd0bf9b9b67d093c265faf900566172f0f882de75a5c8",
	                    "EndpointID": "f4001ddeb82e66b6d7a075f2eb10cbff954e913e3c37fc7a40eaab0f61aa9735",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-680529",
	                        "1574e2ba69a2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
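Individual fields of the inspect output above can be pulled with the same Go templates the harness uses later in this log; for example, the host port mapped to the node's SSH port (33085 in the Ports block above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-680529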
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-680529 -n addons-680529
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 logs -n 25: (1.47870223s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| delete  | -p download-only-242646              | download-only-242646   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| start   | -o=json --download-only              | download-only-228158   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | -p download-only-228158              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| delete  | -p download-only-228158              | download-only-228158   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| delete  | -p download-only-242646              | download-only-242646   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| delete  | -p download-only-228158              | download-only-228158   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| start   | --download-only -p                   | download-docker-474606 | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | download-docker-474606               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-474606            | download-docker-474606 | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| start   | --download-only -p                   | binary-mirror-562366   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | binary-mirror-562366                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39867               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-562366              | binary-mirror-562366   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| addons  | enable dashboard -p                  | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | addons-680529                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | addons-680529                        |                        |         |         |                     |                     |
	| start   | -p addons-680529 --wait=true         | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:57 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable         | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:57 UTC | 11 Dec 24 23:57 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable         | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:57 UTC | 11 Dec 24 23:57 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:57 UTC | 11 Dec 24 23:57 UTC |
	|         | -p addons-680529                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable         | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-680529 ip                     | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	| addons  | addons-680529 addons disable         | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-680529 addons                 | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-680529 addons                 | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-680529 addons                 | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:59 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ssh     | addons-680529 ssh curl -s            | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-680529 ip                     | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:01 UTC | 12 Dec 24 00:01 UTC |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:53:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:53:26.140694  273363 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:53:26.140871  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:53:26.140881  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:53:26.140887  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:53:26.141138  273363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1211 23:53:26.141624  273363 out.go:352] Setting JSON to false
	I1211 23:53:26.142519  273363 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5748,"bootTime":1733955459,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1211 23:53:26.142603  273363 start.go:139] virtualization:  
	I1211 23:53:26.144642  273363 out.go:177] * [addons-680529] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1211 23:53:26.145939  273363 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:53:26.146010  273363 notify.go:220] Checking for updates...
	I1211 23:53:26.148409  273363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:53:26.149818  273363 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1211 23:53:26.151440  273363 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	I1211 23:53:26.152528  273363 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 23:53:26.153714  273363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:53:26.155187  273363 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:53:26.176944  273363 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1211 23:53:26.177079  273363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:53:26.243886  273363 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-11 23:53:26.235020436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1211 23:53:26.243994  273363 docker.go:318] overlay module found
	I1211 23:53:26.245328  273363 out.go:177] * Using the docker driver based on user configuration
	I1211 23:53:26.246391  273363 start.go:297] selected driver: docker
	I1211 23:53:26.246407  273363 start.go:901] validating driver "docker" against <nil>
	I1211 23:53:26.246420  273363 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:53:26.247127  273363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:53:26.297176  273363 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-11 23:53:26.287858692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1211 23:53:26.297406  273363 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:53:26.297635  273363 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:53:26.299324  273363 out.go:177] * Using Docker driver with root privileges
	I1211 23:53:26.300687  273363 cni.go:84] Creating CNI manager for ""
	I1211 23:53:26.300749  273363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:53:26.300766  273363 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:53:26.300855  273363 start.go:340] cluster config:
	{Name:addons-680529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:53:26.302502  273363 out.go:177] * Starting "addons-680529" primary control-plane node in "addons-680529" cluster
	I1211 23:53:26.303640  273363 cache.go:121] Beginning downloading kic base image for docker with crio
	I1211 23:53:26.305136  273363 out.go:177] * Pulling base image v0.0.45-1733912881-20083 ...
	I1211 23:53:26.306304  273363 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:53:26.306361  273363 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1211 23:53:26.306373  273363 cache.go:56] Caching tarball of preloaded images
	I1211 23:53:26.306392  273363 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1211 23:53:26.306466  273363 preload.go:172] Found /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 23:53:26.306477  273363 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:53:26.306827  273363 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/config.json ...
	I1211 23:53:26.306857  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/config.json: {Name:mk469b90b54323209236f5351ccad5d417857cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:26.321839  273363 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1211 23:53:26.321947  273363 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1211 23:53:26.321978  273363 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory, skipping pull
	I1211 23:53:26.321994  273363 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in cache, skipping pull
	I1211 23:53:26.322003  273363 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 as a tarball
	I1211 23:53:26.322018  273363 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from local cache
	I1211 23:53:43.522907  273363 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from cached tarball
	I1211 23:53:43.522944  273363 cache.go:194] Successfully downloaded all kic artifacts
	I1211 23:53:43.522976  273363 start.go:360] acquireMachinesLock for addons-680529: {Name:mka66168fe56cbfe9ea230a9ab15a4bcc0bf82b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:53:43.523101  273363 start.go:364] duration metric: took 107.633µs to acquireMachinesLock for "addons-680529"
	I1211 23:53:43.523128  273363 start.go:93] Provisioning new machine with config: &{Name:addons-680529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:53:43.523197  273363 start.go:125] createHost starting for "" (driver="docker")
	I1211 23:53:43.524914  273363 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1211 23:53:43.525167  273363 start.go:159] libmachine.API.Create for "addons-680529" (driver="docker")
	I1211 23:53:43.525209  273363 client.go:168] LocalClient.Create starting
	I1211 23:53:43.525332  273363 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem
	I1211 23:53:43.804383  273363 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/cert.pem
	I1211 23:53:43.948495  273363 cli_runner.go:164] Run: docker network inspect addons-680529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1211 23:53:43.963207  273363 cli_runner.go:211] docker network inspect addons-680529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1211 23:53:43.963316  273363 network_create.go:284] running [docker network inspect addons-680529] to gather additional debugging logs...
	I1211 23:53:43.963338  273363 cli_runner.go:164] Run: docker network inspect addons-680529
	W1211 23:53:43.976941  273363 cli_runner.go:211] docker network inspect addons-680529 returned with exit code 1
	I1211 23:53:43.976972  273363 network_create.go:287] error running [docker network inspect addons-680529]: docker network inspect addons-680529: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-680529 not found
	I1211 23:53:43.976985  273363 network_create.go:289] output of [docker network inspect addons-680529]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-680529 not found
	
	** /stderr **
	I1211 23:53:43.977072  273363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 23:53:43.993063  273363 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198b000}
	I1211 23:53:43.993106  273363 network_create.go:124] attempt to create docker network addons-680529 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1211 23:53:43.993159  273363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-680529 addons-680529
	I1211 23:53:44.066002  273363 network_create.go:108] docker network addons-680529 192.168.49.0/24 created
	I1211 23:53:44.066040  273363 kic.go:121] calculated static IP "192.168.49.2" for the "addons-680529" container
	I1211 23:53:44.066177  273363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1211 23:53:44.082160  273363 cli_runner.go:164] Run: docker volume create addons-680529 --label name.minikube.sigs.k8s.io=addons-680529 --label created_by.minikube.sigs.k8s.io=true
	I1211 23:53:44.098843  273363 oci.go:103] Successfully created a docker volume addons-680529
	I1211 23:53:44.098942  273363 cli_runner.go:164] Run: docker run --rm --name addons-680529-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-680529 --entrypoint /usr/bin/test -v addons-680529:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib
	I1211 23:53:46.176639  273363 cli_runner.go:217] Completed: docker run --rm --name addons-680529-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-680529 --entrypoint /usr/bin/test -v addons-680529:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib: (2.077639218s)
	I1211 23:53:46.176672  273363 oci.go:107] Successfully prepared a docker volume addons-680529
	I1211 23:53:46.176710  273363 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:53:46.176735  273363 kic.go:194] Starting extracting preloaded images to volume ...
	I1211 23:53:46.176810  273363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-680529:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir
	I1211 23:53:50.231601  273363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-680529:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir: (4.054749684s)
	I1211 23:53:50.231633  273363 kic.go:203] duration metric: took 4.054894215s to extract preloaded images to volume ...
	W1211 23:53:50.231787  273363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1211 23:53:50.231896  273363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1211 23:53:50.293543  273363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-680529 --name addons-680529 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-680529 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-680529 --network addons-680529 --ip 192.168.49.2 --volume addons-680529:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2
	I1211 23:53:50.638889  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Running}}
	I1211 23:53:50.661869  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:53:50.687003  273363 cli_runner.go:164] Run: docker exec addons-680529 stat /var/lib/dpkg/alternatives/iptables
	I1211 23:53:50.738132  273363 oci.go:144] the created container "addons-680529" has a running status.
	I1211 23:53:50.738243  273363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa...
	I1211 23:53:51.681528  273363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1211 23:53:51.707286  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:53:51.730069  273363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1211 23:53:51.730097  273363 kic_runner.go:114] Args: [docker exec --privileged addons-680529 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1211 23:53:51.777999  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:53:51.798969  273363 machine.go:93] provisionDockerMachine start ...
	I1211 23:53:51.799060  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:51.820747  273363 main.go:141] libmachine: Using SSH client type: native
	I1211 23:53:51.821006  273363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1211 23:53:51.821015  273363 main.go:141] libmachine: About to run SSH command:
	hostname
	I1211 23:53:51.953788  273363 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-680529
	
	I1211 23:53:51.953814  273363 ubuntu.go:169] provisioning hostname "addons-680529"
	I1211 23:53:51.953891  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:51.971604  273363 main.go:141] libmachine: Using SSH client type: native
	I1211 23:53:51.971862  273363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1211 23:53:51.971881  273363 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-680529 && echo "addons-680529" | sudo tee /etc/hostname
	I1211 23:53:52.114005  273363 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-680529
	
	I1211 23:53:52.114206  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:52.132000  273363 main.go:141] libmachine: Using SSH client type: native
	I1211 23:53:52.132246  273363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1211 23:53:52.132283  273363 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-680529' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-680529/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-680529' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:53:52.266078  273363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:53:52.266114  273363 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20083-267093/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-267093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-267093/.minikube}
	I1211 23:53:52.266160  273363 ubuntu.go:177] setting up certificates
	I1211 23:53:52.266173  273363 provision.go:84] configureAuth start
	I1211 23:53:52.266236  273363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-680529
	I1211 23:53:52.282671  273363 provision.go:143] copyHostCerts
	I1211 23:53:52.282749  273363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-267093/.minikube/key.pem (1679 bytes)
	I1211 23:53:52.282866  273363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-267093/.minikube/ca.pem (1082 bytes)
	I1211 23:53:52.282929  273363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-267093/.minikube/cert.pem (1123 bytes)
	I1211 23:53:52.282978  273363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-267093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca-key.pem org=jenkins.addons-680529 san=[127.0.0.1 192.168.49.2 addons-680529 localhost minikube]
	I1211 23:53:52.513838  273363 provision.go:177] copyRemoteCerts
	I1211 23:53:52.513921  273363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:53:52.513966  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:52.533714  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:52.627497  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 23:53:52.653369  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:53:52.677218  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 23:53:52.701776  273363 provision.go:87] duration metric: took 435.586406ms to configureAuth
	I1211 23:53:52.701853  273363 ubuntu.go:193] setting minikube options for container-runtime
	I1211 23:53:52.702065  273363 config.go:182] Loaded profile config "addons-680529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:53:52.702211  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:52.719218  273363 main.go:141] libmachine: Using SSH client type: native
	I1211 23:53:52.719465  273363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1211 23:53:52.719487  273363 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:53:52.951461  273363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
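The command above writes an environment drop-in for CRI-O and restarts the service. A sketch of how a systemd unit would pick that file up, assuming (not shown in this log) that the kicbase image's crio.service references it via EnvironmentFile:

	# hypothetical excerpt of crio.service; the real unit is not captured in this log
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS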
	I1211 23:53:52.951525  273363 machine.go:96] duration metric: took 1.152535409s to provisionDockerMachine
	I1211 23:53:52.951553  273363 client.go:171] duration metric: took 9.426336636s to LocalClient.Create
	I1211 23:53:52.951588  273363 start.go:167] duration metric: took 9.426422083s to libmachine.API.Create "addons-680529"
	I1211 23:53:52.951616  273363 start.go:293] postStartSetup for "addons-680529" (driver="docker")
	I1211 23:53:52.951657  273363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:53:52.951787  273363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:53:52.951861  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:52.968961  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:53.063657  273363 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:53:53.067080  273363 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 23:53:53.067163  273363 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1211 23:53:53.067199  273363 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1211 23:53:53.067213  273363 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1211 23:53:53.067226  273363 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-267093/.minikube/addons for local assets ...
	I1211 23:53:53.067294  273363 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-267093/.minikube/files for local assets ...
	I1211 23:53:53.067320  273363 start.go:296] duration metric: took 115.670105ms for postStartSetup
	I1211 23:53:53.067633  273363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-680529
	I1211 23:53:53.086385  273363 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/config.json ...
	I1211 23:53:53.086779  273363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 23:53:53.086835  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:53.103278  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:53.195077  273363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 23:53:53.199584  273363 start.go:128] duration metric: took 9.676367262s to createHost
	I1211 23:53:53.199611  273363 start.go:83] releasing machines lock for "addons-680529", held for 9.676499617s
	I1211 23:53:53.199680  273363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-680529
	I1211 23:53:53.216859  273363 ssh_runner.go:195] Run: cat /version.json
	I1211 23:53:53.216930  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:53.217218  273363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:53:53.217288  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:53.236442  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:53.239362  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:53.461052  273363 ssh_runner.go:195] Run: systemctl --version
	I1211 23:53:53.465350  273363 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:53:53.605737  273363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1211 23:53:53.610034  273363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:53:53.631015  273363 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1211 23:53:53.631097  273363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:53:53.662332  273363 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1211 23:53:53.662400  273363 start.go:495] detecting cgroup driver to use...
	I1211 23:53:53.662450  273363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 23:53:53.662525  273363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:53:53.679214  273363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:53:53.690683  273363 docker.go:217] disabling cri-docker service (if available) ...
	I1211 23:53:53.690752  273363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:53:53.705298  273363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:53:53.720122  273363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:53:53.811749  273363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:53:53.903591  273363 docker.go:233] disabling docker service ...
	I1211 23:53:53.903661  273363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:53:53.923657  273363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:53:53.935510  273363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:53:54.028919  273363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:53:54.124972  273363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:53:54.135898  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:53:54.151607  273363 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1211 23:53:54.151720  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.161361  273363 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:53:54.161451  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.171643  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.181485  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.191090  273363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:53:54.199960  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.209951  273363 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.225659  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
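Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a reconstruction from the commands, not a capture of the file; section headers assumed from the stock CRI-O config layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]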
	I1211 23:53:54.235010  273363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:53:54.243799  273363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
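The two probes above check bridge netfilter and turn on IPv4 forwarding, which pod networking needs. The echo is equivalent to the sysctl form below; the drop-in file name is hypothetical, shown only as the persistent variant:

	sudo sysctl -w net.ipv4.ip_forward=1                                         # same effect as the echo above
	echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf   # hypothetical persistent form
	sudo sysctl --system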
	I1211 23:53:54.252529  273363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:53:54.329703  273363 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:53:54.433564  273363 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:53:54.433725  273363 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:53:54.438072  273363 start.go:563] Will wait 60s for crictl version
	I1211 23:53:54.438216  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:53:54.441667  273363 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:53:54.478611  273363 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1211 23:53:54.478726  273363 ssh_runner.go:195] Run: crio --version
	I1211 23:53:54.515949  273363 ssh_runner.go:195] Run: crio --version
	I1211 23:53:54.557876  273363 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1211 23:53:54.560375  273363 cli_runner.go:164] Run: docker network inspect addons-680529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 23:53:54.580924  273363 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 23:53:54.584432  273363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:53:54.595231  273363 kubeadm.go:883] updating cluster {Name:addons-680529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:53:54.595357  273363 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:53:54.595418  273363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:53:54.677508  273363 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:53:54.677529  273363 crio.go:433] Images already preloaded, skipping extraction
	I1211 23:53:54.677585  273363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:53:54.712881  273363 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:53:54.712906  273363 cache_images.go:84] Images are preloaded, skipping loading
	I1211 23:53:54.712915  273363 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1211 23:53:54.713011  273363 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-680529 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:53:54.713097  273363 ssh_runner.go:195] Run: crio config
	I1211 23:53:54.769241  273363 cni.go:84] Creating CNI manager for ""
	I1211 23:53:54.769265  273363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:53:54.769275  273363 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 23:53:54.769327  273363 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-680529 NodeName:addons-680529 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:53:54.769484  273363 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-680529"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
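Recent kubeadm releases can lint a config like the one above before it is used; a sketch against the file path this run writes a few lines later (availability of the validate subcommand depends on the kubeadm version):

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml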
	I1211 23:53:54.769585  273363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1211 23:53:54.778379  273363 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 23:53:54.778503  273363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:53:54.787651  273363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1211 23:53:54.806004  273363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:53:54.824088  273363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1211 23:53:54.842000  273363 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 23:53:54.845520  273363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:53:54.856161  273363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:53:54.934881  273363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:53:54.948546  273363 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529 for IP: 192.168.49.2
	I1211 23:53:54.948614  273363 certs.go:194] generating shared ca certs ...
	I1211 23:53:54.948644  273363 certs.go:226] acquiring lock for ca certs: {Name:mk75a7b7ee8a94f6f2a55504cc54c197a74cc120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:54.948814  273363 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-267093/.minikube/ca.key
	I1211 23:53:55.384596  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt ...
	I1211 23:53:55.384625  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt: {Name:mk3e52d092dcef5787bc435861f1608c2f947114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:55.384845  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/ca.key ...
	I1211 23:53:55.384861  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/ca.key: {Name:mk9187b90a55d2e1f4f24ea98738619dd0fa0832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:55.384950  273363 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.key
	I1211 23:53:55.792580  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.crt ...
	I1211 23:53:55.792613  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.crt: {Name:mkf2a57bfa0628fdc29088ad9a2c197184da2ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:55.792796  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.key ...
	I1211 23:53:55.792809  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.key: {Name:mk0abeb53d31b4abfeac54a5df449d9e6224a2de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:55.792898  273363 certs.go:256] generating profile certs ...
	I1211 23:53:55.792968  273363 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.key
	I1211 23:53:55.792995  273363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt with IP's: []
	I1211 23:53:56.054981  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt ...
	I1211 23:53:56.055017  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: {Name:mk35f1b1673ed8179bd483f9acbb1465f024781b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:56.055201  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.key ...
	I1211 23:53:56.055215  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.key: {Name:mk73fb31a0e6881eae06c356d452610512a09ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:56.055299  273363 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key.1f96b591
	I1211 23:53:56.055318  273363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt.1f96b591 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1211 23:53:56.406908  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt.1f96b591 ...
	I1211 23:53:56.406942  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt.1f96b591: {Name:mk5ef5b50bf5b0af6aa2229ad8cc8b616cd41b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:56.407123  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key.1f96b591 ...
	I1211 23:53:56.407137  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key.1f96b591: {Name:mkd8712d30c56a797038b17a448278615ff35eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:56.407221  273363 certs.go:381] copying /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt.1f96b591 -> /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt
	I1211 23:53:56.407308  273363 certs.go:385] copying /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key.1f96b591 -> /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key
	I1211 23:53:56.407386  273363 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.key
	I1211 23:53:56.407414  273363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.crt with IP's: []
	I1211 23:53:57.352976  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.crt ...
	I1211 23:53:57.353015  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.crt: {Name:mk2a4928ff8b166e35c1fb625d8ba5ea1ee5a2cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:57.353213  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.key ...
	I1211 23:53:57.353232  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.key: {Name:mk0e895a5a8dd2ede233ffbf83a9fca190c82f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:57.353429  273363 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 23:53:57.353474  273363 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem (1082 bytes)
	I1211 23:53:57.353504  273363 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:53:57.353533  273363 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/key.pem (1679 bytes)
	I1211 23:53:57.354204  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:53:57.385320  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 23:53:57.410544  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:53:57.434244  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 23:53:57.458477  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:53:57.482740  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1211 23:53:57.506097  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:53:57.529883  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 23:53:57.553059  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:53:57.576861  273363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:53:57.594879  273363 ssh_runner.go:195] Run: openssl version
	I1211 23:53:57.600352  273363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 23:53:57.609749  273363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:53:57.613366  273363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:53 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:53:57.613432  273363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:53:57.620526  273363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
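The b5213941.0 link name is OpenSSL's subject hash of the CA, which is exactly what the x509 -hash call above printed; OpenSSL looks CAs up in /etc/ssl/certs by <subject-hash>.0. To reproduce the name:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941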
	I1211 23:53:57.629800  273363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:53:57.633145  273363 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:53:57.633216  273363 kubeadm.go:392] StartCluster: {Name:addons-680529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:53:57.633303  273363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:53:57.633360  273363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:53:57.670503  273363 cri.go:89] found id: ""
	I1211 23:53:57.670593  273363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:53:57.679270  273363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:53:57.687993  273363 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1211 23:53:57.688081  273363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:53:57.696794  273363 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:53:57.696815  273363 kubeadm.go:157] found existing configuration files:
	
	I1211 23:53:57.696865  273363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:53:57.705251  273363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:53:57.705316  273363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:53:57.713714  273363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:53:57.722218  273363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:53:57.722325  273363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:53:57.730419  273363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:53:57.739195  273363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:53:57.739290  273363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:53:57.747508  273363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:53:57.756548  273363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:53:57.756629  273363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:53:57.765360  273363 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
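The init above disables a long list of preflight checks, since container-based nodes fail several host-oriented ones. To run just that phase against a config like this, kubeadm supports executing the preflight phase alone; a sketch with the paths from this log:

	sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml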
	I1211 23:53:57.808245  273363 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1211 23:53:57.808619  273363 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 23:53:57.827671  273363 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1211 23:53:57.827750  273363 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1211 23:53:57.827792  273363 kubeadm.go:310] OS: Linux
	I1211 23:53:57.827842  273363 kubeadm.go:310] CGROUPS_CPU: enabled
	I1211 23:53:57.827899  273363 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1211 23:53:57.827950  273363 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1211 23:53:57.828001  273363 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1211 23:53:57.828053  273363 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1211 23:53:57.828105  273363 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1211 23:53:57.828154  273363 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1211 23:53:57.828205  273363 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1211 23:53:57.828254  273363 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1211 23:53:57.893107  273363 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:53:57.893228  273363 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:53:57.893328  273363 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:53:57.900764  273363 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:53:57.905172  273363 out.go:235]   - Generating certificates and keys ...
	I1211 23:53:57.905274  273363 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 23:53:57.905339  273363 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 23:53:58.203706  273363 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:53:58.960817  273363 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:53:59.367623  273363 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:53:59.616824  273363 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1211 23:54:00.291289  273363 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1211 23:54:00.306669  273363 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-680529 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 23:54:01.098211  273363 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1211 23:54:01.098741  273363 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-680529 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 23:54:01.492139  273363 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:54:02.121408  273363 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:54:02.469189  273363 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1211 23:54:02.469407  273363 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:54:03.606420  273363 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:54:04.316355  273363 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:54:04.781315  273363 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:54:05.174239  273363 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:54:05.680509  273363 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:54:05.681202  273363 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:54:05.684132  273363 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:54:05.688022  273363 out.go:235]   - Booting up control plane ...
	I1211 23:54:05.688135  273363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:54:05.688222  273363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:54:05.688294  273363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:54:05.697930  273363 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:54:05.704196  273363 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:54:05.704410  273363 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 23:54:05.796597  273363 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:54:05.796725  273363 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:54:07.298003  273363 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.50151497s
	I1211 23:54:07.298096  273363 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1211 23:54:13.799463  273363 kubeadm.go:310] [api-check] The API server is healthy after 6.501399768s
	I1211 23:54:13.820040  273363 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:54:13.835028  273363 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:54:13.862093  273363 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:54:13.862331  273363 kubeadm.go:310] [mark-control-plane] Marking the node addons-680529 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:54:13.873106  273363 kubeadm.go:310] [bootstrap-token] Using token: 8wpob4.wstfq9fo1o28lkg0
	I1211 23:54:13.875635  273363 out.go:235]   - Configuring RBAC rules ...
	I1211 23:54:13.875768  273363 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:54:13.881078  273363 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:54:13.889362  273363 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:54:13.893325  273363 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:54:13.902259  273363 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:54:13.906515  273363 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:54:14.207761  273363 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:54:14.637430  273363 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 23:54:15.207319  273363 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 23:54:15.208354  273363 kubeadm.go:310] 
	I1211 23:54:15.208427  273363 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 23:54:15.208433  273363 kubeadm.go:310] 
	I1211 23:54:15.208510  273363 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 23:54:15.208515  273363 kubeadm.go:310] 
	I1211 23:54:15.208540  273363 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 23:54:15.208606  273363 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:54:15.208657  273363 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:54:15.208662  273363 kubeadm.go:310] 
	I1211 23:54:15.208721  273363 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 23:54:15.208727  273363 kubeadm.go:310] 
	I1211 23:54:15.208774  273363 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:54:15.208779  273363 kubeadm.go:310] 
	I1211 23:54:15.208831  273363 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 23:54:15.208905  273363 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:54:15.208975  273363 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:54:15.208980  273363 kubeadm.go:310] 
	I1211 23:54:15.209064  273363 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:54:15.209140  273363 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 23:54:15.209145  273363 kubeadm.go:310] 
	I1211 23:54:15.209228  273363 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8wpob4.wstfq9fo1o28lkg0 \
	I1211 23:54:15.209331  273363 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:907f7936d896e8031b15287260859487f794bdb8c0f9e6400d13c7899dae4a1b \
	I1211 23:54:15.209697  273363 kubeadm.go:310] 	--control-plane 
	I1211 23:54:15.209721  273363 kubeadm.go:310] 
	I1211 23:54:15.209807  273363 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:54:15.209813  273363 kubeadm.go:310] 
	I1211 23:54:15.209894  273363 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8wpob4.wstfq9fo1o28lkg0 \
	I1211 23:54:15.209996  273363 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:907f7936d896e8031b15287260859487f794bdb8c0f9e6400d13c7899dae4a1b 
	I1211 23:54:15.212456  273363 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1072-aws\n", err: exit status 1
	I1211 23:54:15.212586  273363 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
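If the --discovery-token-ca-cert-hash printed above is ever lost, it can be recomputed from the cluster CA (kept under /var/lib/minikube/certs per the kubeadm options earlier in this log) with the standard openssl recipe:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'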
	I1211 23:54:15.212607  273363 cni.go:84] Creating CNI manager for ""
	I1211 23:54:15.212616  273363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:54:15.215530  273363 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1211 23:54:15.218324  273363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1211 23:54:15.222074  273363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1211 23:54:15.222097  273363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1211 23:54:15.241894  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1211 23:54:15.545204  273363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:54:15.545347  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:15.545425  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-680529 minikube.k8s.io/updated_at=2024_12_11T23_54_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=addons-680529 minikube.k8s.io/primary=true
	I1211 23:54:15.727321  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:15.727408  273363 ops.go:34] apiserver oom_adj: -16
	I1211 23:54:16.227619  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:16.727459  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:17.227657  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:17.728243  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:18.227983  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:18.727405  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:19.227395  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:19.728209  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:19.858074  273363 kubeadm.go:1113] duration metric: took 4.31277244s to wait for elevateKubeSystemPrivileges
	I1211 23:54:19.858108  273363 kubeadm.go:394] duration metric: took 22.224914095s to StartCluster
	I1211 23:54:19.858126  273363 settings.go:142] acquiring lock: {Name:mk814eae3eecf1bc157101f19f818cc25695a8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:54:19.858269  273363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1211 23:54:19.858719  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/kubeconfig: {Name:mk58cf12cb3ced247d8613ba49b2fae0b50590ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:54:19.858926  273363 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:54:19.859083  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:54:19.859347  273363 config.go:182] Loaded profile config "addons-680529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:54:19.859395  273363 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1211 23:54:19.859474  273363 addons.go:69] Setting yakd=true in profile "addons-680529"
	I1211 23:54:19.859488  273363 addons.go:234] Setting addon yakd=true in "addons-680529"
	I1211 23:54:19.859513  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.859741  273363 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-680529"
	I1211 23:54:19.859760  273363 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-680529"
	I1211 23:54:19.859781  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.860307  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.860783  273363 addons.go:69] Setting cloud-spanner=true in profile "addons-680529"
	I1211 23:54:19.860799  273363 addons.go:234] Setting addon cloud-spanner=true in "addons-680529"
	I1211 23:54:19.860822  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.861222  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.862288  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.864908  273363 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-680529"
	I1211 23:54:19.864979  273363 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-680529"
	I1211 23:54:19.865009  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.865461  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.870334  273363 addons.go:69] Setting default-storageclass=true in profile "addons-680529"
	I1211 23:54:19.870378  273363 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-680529"
	I1211 23:54:19.870714  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.871110  273363 out.go:177] * Verifying Kubernetes components...
	I1211 23:54:19.874628  273363 addons.go:69] Setting registry=true in profile "addons-680529"
	I1211 23:54:19.874656  273363 addons.go:234] Setting addon registry=true in "addons-680529"
	I1211 23:54:19.874694  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.875182  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.882981  273363 addons.go:69] Setting storage-provisioner=true in profile "addons-680529"
	I1211 23:54:19.883017  273363 addons.go:234] Setting addon storage-provisioner=true in "addons-680529"
	I1211 23:54:19.883060  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.883553  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.891195  273363 addons.go:69] Setting gcp-auth=true in profile "addons-680529"
	I1211 23:54:19.891239  273363 mustload.go:65] Loading cluster: addons-680529
	I1211 23:54:19.891449  273363 config.go:182] Loaded profile config "addons-680529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:54:19.891722  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.902331  273363 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-680529"
	I1211 23:54:19.902376  273363 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-680529"
	I1211 23:54:19.902622  273363 addons.go:69] Setting ingress=true in profile "addons-680529"
	I1211 23:54:19.902649  273363 addons.go:234] Setting addon ingress=true in "addons-680529"
	I1211 23:54:19.902695  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.902739  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.903070  273363 addons.go:69] Setting ingress-dns=true in profile "addons-680529"
	I1211 23:54:19.903088  273363 addons.go:234] Setting addon ingress-dns=true in "addons-680529"
	I1211 23:54:19.903152  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.918246  273363 addons.go:69] Setting volcano=true in profile "addons-680529"
	I1211 23:54:19.918284  273363 addons.go:234] Setting addon volcano=true in "addons-680529"
	I1211 23:54:19.918332  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.918822  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.923307  273363 addons.go:69] Setting inspektor-gadget=true in profile "addons-680529"
	I1211 23:54:19.928233  273363 addons.go:234] Setting addon inspektor-gadget=true in "addons-680529"
	I1211 23:54:19.928313  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.928076  273363 addons.go:69] Setting metrics-server=true in profile "addons-680529"
	I1211 23:54:19.928483  273363 addons.go:234] Setting addon metrics-server=true in "addons-680529"
	I1211 23:54:19.928505  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.929020  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.937426  273363 addons.go:69] Setting volumesnapshots=true in profile "addons-680529"
	I1211 23:54:19.937730  273363 addons.go:234] Setting addon volumesnapshots=true in "addons-680529"
	I1211 23:54:19.937895  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.928096  273363 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-680529"
	I1211 23:54:19.957797  273363 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-680529"
	I1211 23:54:19.957855  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.928173  273363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:54:20.005612  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.043275  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.052925  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.065856  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.082385  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.082869  273363 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:54:20.099569  273363 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1211 23:54:20.104206  273363 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:54:20.104236  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:54:20.104314  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.127127  273363 addons.go:234] Setting addon default-storageclass=true in "addons-680529"
	I1211 23:54:20.127192  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:20.129146  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.134985  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:54:20.135009  273363 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:54:20.135086  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.152035  273363 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:54:20.152295  273363 host.go:66] Checking if "addons-680529" exists ...
	W1211 23:54:20.169129  273363 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:54:20.172055  273363 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:54:20.172076  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:54:20.172554  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.173775  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:54:20.174484  273363 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1211 23:54:20.187839  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:54:20.191228  273363 out.go:177]   - Using image docker.io/registry:2.8.3
	I1211 23:54:20.196155  273363 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:54:20.198361  273363 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:54:20.198400  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:54:20.198476  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.208334  273363 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:54:20.208410  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:54:20.208517  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.228804  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:54:20.233197  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:54:20.234507  273363 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-680529"
	I1211 23:54:20.234549  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:20.235517  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.249752  273363 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1211 23:54:20.252415  273363 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:54:20.252444  273363 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:54:20.252511  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.303850  273363 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1211 23:54:20.305739  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:54:20.305820  273363 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1211 23:54:20.305852  273363 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1211 23:54:20.306936  273363 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:54:20.306956  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:54:20.307019  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.316965  273363 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1211 23:54:20.317000  273363 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1211 23:54:20.317077  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.339003  273363 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:54:20.339028  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1211 23:54:20.339111  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.346261  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:54:20.347042  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.349164  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:54:20.350908  273363 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:54:20.354350  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:54:20.354491  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.354989  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:54:20.355006  273363 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:54:20.355079  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.389872  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:54:20.394317  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:54:20.396369  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.398242  273363 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:54:20.406311  273363 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1211 23:54:20.406442  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:54:20.406454  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:54:20.406528  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.406806  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.409528  273363 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:54:20.409555  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:54:20.409620  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.421347  273363 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:54:20.421368  273363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:54:20.421432  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.443059  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.447038  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.485389  273363 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:54:20.487983  273363 out.go:177]   - Using image docker.io/busybox:stable
	I1211 23:54:20.495688  273363 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:54:20.495710  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:54:20.495774  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.558391  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.559060  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.575533  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.577426  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.587210  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.604184  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.609850  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.630188  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.912189  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:54:20.912215  273363 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:54:20.923688  273363 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:54:20.923715  273363 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:54:20.933128  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:54:20.969541  273363 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:54:20.969566  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:54:20.982102  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:54:21.001243  273363 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:54:21.001270  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:54:21.035929  273363 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:54:21.035951  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1211 23:54:21.090449  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:54:21.096193  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:54:21.132833  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:54:21.136833  273363 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:54:21.136906  273363 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:54:21.140172  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:54:21.172014  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:54:21.172086  273363 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:54:21.185687  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:54:21.190398  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:54:21.190470  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:54:21.194533  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:54:21.198899  273363 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.236594248s)
	I1211 23:54:21.199011  273363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:54:21.203589  273363 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:54:21.203664  273363 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:54:21.209827  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:54:21.237417  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:54:21.347532  273363 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:54:21.347610  273363 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:54:21.419329  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:54:21.419407  273363 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:54:21.447244  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:54:21.447321  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:54:21.457742  273363 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:54:21.457819  273363 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:54:21.576089  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:54:21.607276  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:54:21.607350  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:54:21.638898  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:54:21.638979  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:54:21.663496  273363 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:54:21.663566  273363 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:54:21.809177  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:54:21.850817  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:54:21.850895  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:54:21.854635  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:54:21.854712  273363 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:54:21.968757  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:54:21.968779  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:54:21.976710  273363 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:54:21.976781  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:54:22.048462  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:54:22.064980  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:54:22.065054  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:54:22.136755  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:54:22.136835  273363 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:54:22.195297  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:54:22.195370  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:54:22.351779  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:54:22.351842  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:54:22.522050  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:54:22.522132  273363 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:54:22.697289  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:54:23.085069  273363 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.897188221s)
	I1211 23:54:23.085163  273363 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
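
The pipeline that just completed splices a hosts{} block in front of CoreDNS's forward plugin so that host.minikube.internal resolves to the gateway address 192.168.49.1. The sed-over-kubectl pipeline in the log is the actual mechanism; purely as illustration, a minimal client-go sketch of the equivalent ConfigMap edit (hypothetical injectHostRecord helper, assuming an existing clientset cs; clientset construction omitted):

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// injectHostRecord inserts a hosts{} block ahead of the forward plugin
	// in the coredns Corefile, the same edit the sed pipeline performs.
	func injectHostRecord(ctx context.Context, cs kubernetes.Interface) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		hosts := "        hosts {\n" +
			"           192.168.49.1 host.minikube.internal\n" +
			"           fallthrough\n" +
			"        }\n"
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"        forward .", hosts+"        forward .", 1)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}
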
	I1211 23:54:24.169994  273363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-680529" context rescaled to 1 replicas
	I1211 23:54:25.128048  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.194881013s)
	I1211 23:54:26.109492  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.127349013s)
	I1211 23:54:26.109590  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.019120876s)
	I1211 23:54:26.109665  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.976760436s)
	I1211 23:54:26.109890  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.969648397s)
	I1211 23:54:26.109927  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.924180177s)
	I1211 23:54:26.109985  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.013414442s)
	I1211 23:54:26.110004  273363 addons.go:475] Verifying addon registry=true in "addons-680529"
	I1211 23:54:26.112957  273363 out.go:177] * Verifying registry addon...
	I1211 23:54:26.116446  273363 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:54:26.124038  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.929431734s)
	I1211 23:54:26.124221  273363 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.925185617s)
	I1211 23:54:26.125071  273363 node_ready.go:35] waiting up to 6m0s for node "addons-680529" to be "Ready" ...
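
The node_ready.go:35 wait above (and the "has status \"Ready\":\"False\"" lines that follow) polls the node object until its Ready condition turns True. A minimal client-go sketch of one such probe (hypothetical nodeReady helper, assuming an existing clientset cs):

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the node's Ready condition is True, the
	// check behind the node_ready.go:53 status lines in this log.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
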
	I1211 23:54:26.162992  273363 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:54:26.163022  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1211 23:54:26.191062  273363 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
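
The warning above is Kubernetes optimistic concurrency at work: the local-path storage class changed between the read and the write, so the update was rejected with a 409 Conflict. The standard client-go remedy is to re-read and re-apply the change under retry.RetryOnConflict; a minimal sketch (hypothetical markNonDefault helper, assuming an existing clientset cs; not minikube's actual callback):

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation, re-reading the
	// object on each 409 Conflict instead of failing on the first one.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}
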
	I1211 23:54:26.391263  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.181344081s)
	I1211 23:54:26.660156  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:27.123710  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.547535075s)
	I1211 23:54:27.123768  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.314507093s)
	I1211 23:54:27.123789  273363 addons.go:475] Verifying addon metrics-server=true in "addons-680529"
	I1211 23:54:27.123857  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.886372813s)
	I1211 23:54:27.123881  273363 addons.go:475] Verifying addon ingress=true in "addons-680529"
	I1211 23:54:27.126739  273363 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-680529 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:54:27.126934  273363 out.go:177] * Verifying ingress addon...
	I1211 23:54:27.130674  273363 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:54:27.159300  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:27.160252  273363 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:54:27.160302  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
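
The kapi.go:75/86/96 lines here and below are a poll loop: list pods by label selector, report the current phase, and retry on a roughly half-second cadence until every match is Running. A minimal client-go sketch of that loop (hypothetical waitForPodsRunning helper, assuming an existing clientset cs):

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning lists pods by label selector and retries until
	// every match is Running or the timeout expires.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
				}
			}
			if running {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in this log
		}
		return fmt.Errorf("pods %q in %q not Running within %s", selector, ns, timeout)
	}
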
	I1211 23:54:27.296838  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.248278878s)
	W1211 23:54:27.296925  273363 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:54:27.296967  273363 retry.go:31] will retry after 165.916789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
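
The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass object was applied in the same batch as the CRDs that define it, so the first apply fails with "ensure CRDs are installed first" and retry.go:31 schedules a re-run after ~166ms (with --force, as the next line shows). A minimal sketch of that retry shape (hypothetical retryAfter helper; minikube's own retry package differs):

	import "time"

	// retryAfter re-runs step up to attempts times, sleeping delay between
	// tries; the log above uses one ~166ms delay before re-applying.
	func retryAfter(attempts int, delay time.Duration, step func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = step(); err == nil {
				return nil
			}
			time.Sleep(delay)
		}
		return err
	}
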
	I1211 23:54:27.463414  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:54:27.620767  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.923372321s)
	I1211 23:54:27.620851  273363 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-680529"
	I1211 23:54:27.623241  273363 out.go:177] * Verifying csi-hostpath-driver addon...
	I1211 23:54:27.625994  273363 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:54:27.639982  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:27.641488  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:27.642967  273363 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:54:27.643029  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:28.120366  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:28.130214  273363 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:54:28.130737  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:28.130713  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:28.134768  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:28.620879  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:28.630618  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:28.634236  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:29.119893  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:29.129853  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:29.134659  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:29.620198  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:29.629853  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:29.634584  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:29.779161  273363 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:54:29.779244  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:29.796490  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:29.907471  273363 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:54:29.940138  273363 addons.go:234] Setting addon gcp-auth=true in "addons-680529"
	I1211 23:54:29.940187  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:29.940658  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:29.960885  273363 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:54:29.960941  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:29.983958  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:30.126582  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:30.139769  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:30.144397  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:30.144745  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:30.297057  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.833545548s)
	I1211 23:54:30.300211  273363 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:54:30.302810  273363 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:54:30.305274  273363 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:54:30.305307  273363 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:54:30.323881  273363 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:54:30.323909  273363 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:54:30.343795  273363 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:54:30.343857  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:54:30.362858  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:54:30.621377  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:30.633146  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:30.636257  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:30.884218  273363 addons.go:475] Verifying addon gcp-auth=true in "addons-680529"
	I1211 23:54:30.888490  273363 out.go:177] * Verifying gcp-auth addon...
	I1211 23:54:30.891918  273363 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:54:30.896094  273363 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:54:30.896117  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:31.120559  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:31.131431  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:31.134698  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:31.395158  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:31.619953  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:31.629922  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:31.634451  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:31.895950  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:32.120088  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:32.131178  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:32.134177  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:32.395608  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:32.622022  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:32.628179  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:32.629859  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:32.634250  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:32.895683  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:33.120201  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:33.132144  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:33.134657  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:33.395223  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:33.622327  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:33.629746  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:33.634825  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:33.895406  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:34.119591  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:34.130235  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:34.134783  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:34.395355  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:34.620458  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:34.629309  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:34.629786  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:34.634646  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:34.895224  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:35.120660  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:35.131151  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:35.134708  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:35.395016  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:35.620240  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:35.629871  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:35.634274  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:35.895349  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:36.119980  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:36.131200  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:36.134540  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:36.395013  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:36.620228  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:36.629934  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:36.630491  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:36.634931  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:36.895268  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:37.120240  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:37.131559  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:37.134485  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:37.395872  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:37.620010  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:37.630131  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:37.635029  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:37.895248  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:38.119372  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:38.129343  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:38.140385  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:38.395567  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:38.620763  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:38.630409  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:38.634040  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:38.895271  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:39.119844  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:39.128896  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:39.131666  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:39.134162  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:39.395146  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:39.620412  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:39.630071  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:39.634901  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:39.895327  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:40.120599  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:40.130586  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:40.135612  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:40.395283  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:40.620520  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:40.630695  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:40.634268  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:40.895600  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:41.120286  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:41.130513  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:41.135000  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:41.395376  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:41.619775  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:41.630091  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:41.630508  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:41.634783  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:41.895405  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:42.120167  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:42.131995  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:42.135862  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:42.395583  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:42.620367  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:42.630534  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:42.634559  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:42.894911  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:43.119916  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:43.132790  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:43.134674  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:43.395046  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:43.619538  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:43.630259  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:43.634367  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:43.895722  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:44.119746  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:44.128625  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:44.130450  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:44.134906  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:44.395344  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:44.620371  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:44.630726  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:44.634186  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:44.895973  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:45.120942  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:45.143672  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:45.144132  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:45.395279  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:45.620438  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:45.629807  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:45.634575  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:45.895989  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:46.120517  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:46.128821  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:46.131700  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:46.134332  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:46.395878  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:46.620353  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:46.629825  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:46.634660  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:46.894849  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:47.119521  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:47.131303  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:47.134563  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:47.395702  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:47.619551  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:47.629880  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:47.634614  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:47.895315  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:48.120091  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:48.129638  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:48.134296  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:48.136076  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:48.395444  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:48.620380  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:48.631204  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:48.634400  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:48.899608  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:49.119894  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:49.129991  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:49.134872  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:49.394980  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:49.620716  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:49.629579  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:49.635153  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:49.895562  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:50.119884  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:50.130753  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:50.131359  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:50.134528  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:50.395846  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:50.621379  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:50.629374  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:50.634318  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:50.895532  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:51.120423  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:51.129594  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:51.135296  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:51.395717  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:51.619730  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:51.630394  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:51.634107  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:51.895582  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:52.119900  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:52.130581  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:52.134302  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:52.395634  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:52.619878  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:52.628740  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:52.630331  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:52.634382  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:52.895516  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:53.119629  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:53.136027  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:53.136572  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:53.395551  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:53.620416  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:53.630418  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:53.636086  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:53.895158  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:54.120307  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:54.129598  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:54.134661  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:54.395174  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:54.620315  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:54.630459  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:54.634294  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:54.895655  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:55.120431  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:55.128842  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:55.129883  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:55.134780  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:55.395218  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:55.619270  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:55.630124  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:55.634905  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:55.895814  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:56.119869  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:56.130867  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:56.133986  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:56.395730  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:56.620096  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:56.631020  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:56.634376  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:56.895475  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:57.119960  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:57.130077  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:57.131753  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:57.134603  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:57.396002  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:57.620071  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:57.631816  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:57.634672  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:57.895175  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:58.120449  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:58.136294  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:58.138221  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:58.395610  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:58.620108  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:58.629277  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:58.635091  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:58.895594  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:59.119685  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:59.130987  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:59.131169  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:59.133995  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:59.395539  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:59.620597  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:59.630369  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:59.634908  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:59.896212  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:00.120910  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:00.162727  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:00.164141  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:00.395606  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:00.621059  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:00.631181  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:00.636794  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:00.895207  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:01.120054  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:01.131149  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:55:01.132655  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:01.136958  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:01.395414  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:01.621119  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:01.630870  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:01.634711  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:01.895372  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:02.120939  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:02.130976  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:02.134728  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:02.395443  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:02.620203  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:02.629678  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:02.634527  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:02.896086  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:03.119866  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:03.130202  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:03.135406  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:03.395947  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:03.620025  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:03.628392  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:55:03.629802  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:03.635017  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:03.895656  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:04.120009  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:04.131263  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:04.134483  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:04.395851  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:04.619674  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:04.630268  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:04.634809  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:04.895438  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:05.119749  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:05.134404  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:05.135700  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:05.395361  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:05.619579  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:05.628966  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:55:05.630309  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:05.633874  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:05.895128  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:06.120083  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:06.131639  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:06.134571  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:06.395021  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:06.619962  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:06.630332  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:06.634256  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:06.895761  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:07.139764  273363 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:55:07.139788  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:07.143877  273363 node_ready.go:49] node "addons-680529" has status "Ready":"True"
	I1211 23:55:07.143901  273363 node_ready.go:38] duration metric: took 41.018799567s for node "addons-680529" to be "Ready" ...
	I1211 23:55:07.143912  273363 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1211 23:55:07.163487  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:07.165850  273363 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:55:07.165912  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:07.170615  273363 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ltfkm" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:07.484946  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:07.620593  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:07.631056  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:07.634943  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:07.897339  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:08.122493  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:08.223988  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:08.225144  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:08.396538  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:08.620788  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:08.631216  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:08.634666  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:08.895824  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:09.120424  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:09.132452  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:09.137089  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:09.178527  273363 pod_ready.go:93] pod "coredns-7c65d6cfc9-ltfkm" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.178601  273363 pod_ready.go:82] duration metric: took 2.0079513s for pod "coredns-7c65d6cfc9-ltfkm" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.178647  273363 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.194866  273363 pod_ready.go:93] pod "etcd-addons-680529" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.194929  273363 pod_ready.go:82] duration metric: took 16.261201ms for pod "etcd-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.194967  273363 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.209691  273363 pod_ready.go:93] pod "kube-apiserver-addons-680529" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.209762  273363 pod_ready.go:82] duration metric: took 14.774718ms for pod "kube-apiserver-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.209790  273363 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.243158  273363 pod_ready.go:93] pod "kube-controller-manager-addons-680529" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.243231  273363 pod_ready.go:82] duration metric: took 33.418905ms for pod "kube-controller-manager-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.243262  273363 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rl6lb" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.264589  273363 pod_ready.go:93] pod "kube-proxy-rl6lb" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.264662  273363 pod_ready.go:82] duration metric: took 21.377089ms for pod "kube-proxy-rl6lb" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.264690  273363 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.396632  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:09.575937  273363 pod_ready.go:93] pod "kube-scheduler-addons-680529" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.576018  273363 pod_ready.go:82] duration metric: took 311.291529ms for pod "kube-scheduler-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.576045  273363 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.621105  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:09.634365  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:09.640428  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:09.896149  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:10.121096  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:10.131093  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:10.136731  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:10.396091  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:10.620832  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:10.631456  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:10.635428  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:10.896134  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:11.120879  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:11.131912  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:11.136058  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:11.395856  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:11.582046  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:11.620837  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:11.631037  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:11.635342  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:11.896572  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:12.121359  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:12.134937  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:12.140495  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:12.396503  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:12.620923  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:12.631865  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:12.636593  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:12.897163  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:13.121044  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:13.136015  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:13.144241  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:13.396658  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:13.583324  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:13.625543  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:13.634269  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:13.637736  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:13.896839  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:14.121128  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:14.131466  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:14.135210  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:14.396263  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:14.621051  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:14.630983  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:14.634895  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:14.895697  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:15.122298  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:15.132827  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:15.134816  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:15.395508  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:15.584552  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:15.621041  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:15.631816  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:15.635355  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:15.896864  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:16.124550  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:16.134000  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:16.137950  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:16.396133  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:16.620508  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:16.632736  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:16.635382  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:16.896539  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:17.122608  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:17.133282  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:17.137667  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:17.396323  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:17.624040  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:17.632563  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:17.636804  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:17.895514  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:18.083147  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:18.120295  273363 kapi.go:107] duration metric: took 52.003846663s to wait for kubernetes.io/minikube-addons=registry ...
	I1211 23:55:18.132790  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:18.141601  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:18.396753  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:18.631791  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:18.635945  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:18.895468  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:19.132359  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:19.137655  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:19.395717  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:19.631072  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:19.636841  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:19.896608  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:20.087274  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:20.153725  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:20.162096  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:20.401673  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:20.632324  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:20.637202  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:20.896506  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:21.132073  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:21.137100  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:21.397217  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:21.632639  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:21.637339  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:21.896202  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:22.132690  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:22.137591  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:22.396719  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:22.582839  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:22.631964  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:22.636999  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:22.895834  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:23.134279  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:23.136927  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:23.395713  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:23.664268  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:23.666703  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:23.895423  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:24.142916  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:24.144572  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:24.396665  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:24.591826  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:24.634005  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:24.637388  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:24.897265  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:25.144113  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:25.145968  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:25.395595  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:25.632617  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:25.636674  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:25.897051  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:26.132669  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:26.139108  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:26.397244  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:26.635974  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:26.638312  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:26.898697  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:27.083309  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:27.134029  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:27.137572  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:27.396574  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:27.632587  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:27.637131  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:27.895751  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:28.138311  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:28.138574  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:28.396374  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:28.632427  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:28.635434  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:28.896313  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:29.085151  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:29.138518  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:29.139826  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:29.396503  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:29.631145  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:29.634822  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:29.895453  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:30.134801  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:30.136071  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:30.396290  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:30.631385  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:30.634891  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:30.895348  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:31.087385  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:31.142570  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:31.144381  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:31.395963  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:31.633646  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:31.636976  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:31.896229  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:32.135720  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:32.136703  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:32.398858  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:32.631867  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:32.635462  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:32.896336  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:33.131161  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:33.140945  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:33.401245  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:33.584322  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:33.632242  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:33.636504  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:33.897106  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:34.131997  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:34.135784  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:34.397224  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:34.632733  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:34.639843  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:34.895514  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:35.133870  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:35.139052  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:35.396080  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:35.633814  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:35.637614  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:35.896894  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:36.084639  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:36.135211  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:36.139177  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:36.396278  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:36.631381  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:36.635037  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:36.895965  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:37.132628  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:37.135218  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:37.398897  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:37.631864  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:37.635392  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:37.895480  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:38.132634  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:38.136649  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:38.396257  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:38.582561  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:38.631258  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:38.634839  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:38.895157  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:39.131520  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:39.135213  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:39.396289  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:39.631228  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:39.635124  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:39.895746  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:40.133367  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:40.136684  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:40.396465  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:40.582857  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:40.631633  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:40.635786  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:40.895564  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:41.131784  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:41.135830  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:41.395667  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:41.631227  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:41.635562  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:41.900915  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:42.132172  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:42.137643  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:42.395944  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:42.632339  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:42.636383  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:42.895801  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:43.083634  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:43.140743  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:43.146449  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:43.396028  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:43.637659  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:43.640638  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:43.896595  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:44.132143  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:44.136663  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:44.395210  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:44.635819  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:44.646722  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:44.931645  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:45.103273  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:45.135988  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:45.139712  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:45.397709  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:45.631286  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:45.635154  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:45.896978  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:46.136766  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:46.142791  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:46.398377  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:46.632088  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:46.635845  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:46.895591  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:47.133953  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:47.138828  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:47.398329  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:47.583338  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:47.640165  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:47.641616  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:47.897161  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:48.134111  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:48.141106  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:48.395872  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:48.640522  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:48.640706  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:48.896514  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:49.131704  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:49.135671  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:49.395492  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:49.631896  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:49.637149  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:49.895729  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:50.085771  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:50.132940  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:50.136859  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:50.396638  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:50.632719  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:50.637217  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:50.895428  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:51.135669  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:51.138548  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:51.404054  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:51.635176  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:51.644240  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:51.896435  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:52.134070  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:52.138537  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:52.396210  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:52.584070  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:52.636401  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:52.636819  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:52.896723  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:53.159218  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:53.161064  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:53.398009  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:53.641328  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:53.643514  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:53.897759  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:54.140570  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:54.144117  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:54.395853  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:54.631867  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:54.635361  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:54.895826  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:55.082715  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:55.131804  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:55.136392  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:55.395793  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:55.630739  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:55.634938  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:55.895438  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:56.134975  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:56.136914  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:56.395475  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:56.634291  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:56.640688  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:56.895596  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:57.084031  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:57.133679  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:57.143869  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:57.396429  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:57.632015  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:57.637950  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:57.896452  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:58.137125  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:58.139420  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:58.396912  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:58.632294  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:58.635490  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:58.896409  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:59.084956  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:59.132096  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:59.136692  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:59.396167  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:59.633579  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:59.637498  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:59.896063  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:00.146644  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:00.187102  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:00.473739  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:00.632016  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:00.635042  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:00.896185  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:01.132267  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:01.136868  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:01.414472  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:01.588816  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:01.645412  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:01.647636  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:01.896400  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:02.132737  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:02.134947  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:02.395625  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:02.632536  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:02.641942  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:02.895870  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:03.133984  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:03.140756  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:03.398209  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:03.633254  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:03.637757  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:03.900000  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:04.089146  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:04.133315  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:04.135364  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:04.395916  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:04.638903  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:04.639185  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:04.895881  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:05.131842  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:05.135828  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:05.395745  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:05.634899  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:05.638829  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:05.913179  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:06.133113  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:06.136977  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:06.396179  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:06.582374  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:06.632634  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:06.635435  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:06.896867  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:07.138792  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:07.139203  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:07.395646  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:07.631988  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:07.636612  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:07.897366  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:08.132152  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:08.137544  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:08.396290  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:08.586982  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:08.632454  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:08.634730  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:08.895870  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:09.133165  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:09.138520  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:09.396612  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:09.634562  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:09.638392  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:09.896173  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:10.141117  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:10.148775  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:10.396081  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:10.635461  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:10.640211  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:10.896460  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:11.083749  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:11.132889  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:11.136334  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:11.396154  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:11.632088  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:11.636469  273363 kapi.go:107] duration metric: took 1m44.505789982s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1211 23:56:11.896334  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:12.131162  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:12.396076  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:12.633019  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:12.897604  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:13.133443  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:13.395651  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:13.587136  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:13.635324  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:13.895920  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:14.132162  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:14.395081  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:14.632037  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:14.896173  273363 kapi.go:107] duration metric: took 1m44.00425351s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1211 23:56:14.899175  273363 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-680529 cluster.
	I1211 23:56:14.901743  273363 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1211 23:56:14.904248  273363 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
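
The opt-out described in the message above is just a pod label. A minimal sketch of such a pod built with the Kubernetes Go API types — the `"true"` value is illustrative; per the message, it is the presence of the `gcp-auth-skip-secret` key that matters:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-creds",
			// Presence of this label key tells the gcp-auth webhook to skip
			// mounting credentials into the pod (the value is illustrative).
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	fmt.Println(pod.Labels)
}
```
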
	I1211 23:56:15.132518  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:15.633124  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:16.082895  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:16.133547  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:16.638717  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:17.131958  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:17.631831  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:18.083132  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:18.146998  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:18.632988  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:19.132663  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:19.631743  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:20.085326  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:20.140971  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:20.632160  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:21.132575  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:21.631011  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:22.132447  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:22.583409  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:22.635209  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:23.132783  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:23.631673  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:24.132455  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:24.584094  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:24.632875  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:25.134062  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:25.632006  273363 kapi.go:107] duration metric: took 1m58.006010063s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1211 23:56:25.633960  273363 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1211 23:56:25.635279  273363 addons.go:510] duration metric: took 2m5.775875639s for enable addons: enabled=[cloud-spanner storage-provisioner amd-gpu-device-plugin ingress-dns nvidia-device-plugin storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
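
The long runs of kapi.go:96 "waiting for pod" lines above are the output of a poll-until-Running loop over a label selector. A minimal client-go sketch of that pattern — function names, tick interval, and log wording here are illustrative, not minikube's actual implementation:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns until every one
// reports phase Running, logging the current state on each tick — the
// same shape as the repeated kapi.go:96 lines above.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForPodsRunning(cs, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 5*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```
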
	I1211 23:56:27.082986  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:29.083043  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:31.583162  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:34.082432  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:36.083941  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:38.582349  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:40.583491  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:43.083543  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:45.086045  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:47.582507  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:49.582915  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:52.083047  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:54.083130  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:56.086544  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:58.083614  273363 pod_ready.go:93] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"True"
	I1211 23:56:58.083645  273363 pod_ready.go:82] duration metric: took 1m48.507571525s for pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace to be "Ready" ...
	I1211 23:56:58.083658  273363 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pcmmw" in "kube-system" namespace to be "Ready" ...
	I1211 23:56:58.096690  273363 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pcmmw" in "kube-system" namespace has status "Ready":"True"
	I1211 23:56:58.096732  273363 pod_ready.go:82] duration metric: took 13.05836ms for pod "nvidia-device-plugin-daemonset-pcmmw" in "kube-system" namespace to be "Ready" ...
	I1211 23:56:58.096757  273363 pod_ready.go:39] duration metric: took 1m50.952832861s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
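
The pod_ready.go lines, by contrast, wait on the pod's Ready condition rather than its phase — which is why metrics-server could sit at `"Ready":"False"` long after its pod was Running. A small sketch of that check, assuming the standard k8s.io/api types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether the PodReady condition is True — the check
// behind the pod_ready.go "Ready":"False"/"True" lines above.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(podReady(p)) // false, like the repeated metrics-server lines
}
```
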
	I1211 23:56:58.096778  273363 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:56:58.096816  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 23:56:58.096899  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 23:56:58.166276  273363 cri.go:89] found id: "a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:56:58.166306  273363 cri.go:89] found id: ""
	I1211 23:56:58.166315  273363 logs.go:282] 1 containers: [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9]
	I1211 23:56:58.166372  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.170754  273363 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 23:56:58.170830  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 23:56:58.209581  273363 cri.go:89] found id: "df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:56:58.209605  273363 cri.go:89] found id: ""
	I1211 23:56:58.209614  273363 logs.go:282] 1 containers: [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30]
	I1211 23:56:58.209671  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.213465  273363 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 23:56:58.213592  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 23:56:58.265032  273363 cri.go:89] found id: "a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:56:58.265098  273363 cri.go:89] found id: ""
	I1211 23:56:58.265120  273363 logs.go:282] 1 containers: [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f]
	I1211 23:56:58.265203  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.268819  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 23:56:58.268935  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 23:56:58.307716  273363 cri.go:89] found id: "a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:56:58.307782  273363 cri.go:89] found id: ""
	I1211 23:56:58.307805  273363 logs.go:282] 1 containers: [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8]
	I1211 23:56:58.307921  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.311994  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 23:56:58.312140  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 23:56:58.350576  273363 cri.go:89] found id: "f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:56:58.350613  273363 cri.go:89] found id: ""
	I1211 23:56:58.350622  273363 logs.go:282] 1 containers: [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73]
	I1211 23:56:58.350713  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.354323  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 23:56:58.354398  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 23:56:58.396270  273363 cri.go:89] found id: "b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:56:58.396294  273363 cri.go:89] found id: ""
	I1211 23:56:58.396303  273363 logs.go:282] 1 containers: [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7]
	I1211 23:56:58.396367  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.400310  273363 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 23:56:58.400421  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 23:56:58.439413  273363 cri.go:89] found id: "d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:56:58.439436  273363 cri.go:89] found id: ""
	I1211 23:56:58.439444  273363 logs.go:282] 1 containers: [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5]
	I1211 23:56:58.439500  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.443103  273363 logs.go:123] Gathering logs for kube-scheduler [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8] ...
	I1211 23:56:58.443128  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:56:58.497076  273363 logs.go:123] Gathering logs for kube-controller-manager [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7] ...
	I1211 23:56:58.497110  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:56:58.571113  273363 logs.go:123] Gathering logs for kindnet [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5] ...
	I1211 23:56:58.571152  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:56:58.614912  273363 logs.go:123] Gathering logs for dmesg ...
	I1211 23:56:58.614948  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 23:56:58.633232  273363 logs.go:123] Gathering logs for etcd [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30] ...
	I1211 23:56:58.633261  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:56:58.684598  273363 logs.go:123] Gathering logs for kube-apiserver [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9] ...
	I1211 23:56:58.684631  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:56:58.742475  273363 logs.go:123] Gathering logs for coredns [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f] ...
	I1211 23:56:58.742525  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:56:58.794029  273363 logs.go:123] Gathering logs for kube-proxy [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73] ...
	I1211 23:56:58.794062  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:56:58.843104  273363 logs.go:123] Gathering logs for CRI-O ...
	I1211 23:56:58.843130  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 23:56:58.939616  273363 logs.go:123] Gathering logs for container status ...
	I1211 23:56:58.939655  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 23:56:58.997188  273363 logs.go:123] Gathering logs for kubelet ...
	I1211 23:56:58.997216  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 23:56:59.079449  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.069258    1516 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.079684  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:56:59.079866  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.080089  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:56:59.080258  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.080464  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
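
These "Found kubelet problem" entries come from scanning the kubelet journal for warning- and error-level klog lines. A rough sketch of that scan — the severity regex is a guess at the shape of the filter, not the real logs.go:138 pattern:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// klog severity prefix, e.g. "W1211 23:55:07" or "E1211 23:55:07" —
// a simplification of whatever logs.go:138 actually matches.
var severity = regexp.MustCompile(`[WE]\d{4} \d{2}:\d{2}:\d{2}`)

// findKubeletProblems pulls the last 400 kubelet journal lines and flags
// warning/error entries, in the spirit of the lines flagged above.
func findKubeletProblems() ([]string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		return nil, err
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if severity.MatchString(sc.Text()) {
			problems = append(problems, sc.Text())
		}
	}
	return problems, sc.Err()
}

func main() {
	problems, err := findKubeletProblems()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range problems {
		fmt.Println("Found kubelet problem:", p)
	}
}
```
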
	I1211 23:56:59.117935  273363 logs.go:123] Gathering logs for describe nodes ...
	I1211 23:56:59.117968  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 23:56:59.315413  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:56:59.315441  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1211 23:56:59.315500  273363 out.go:270] X Problems detected in kubelet:
	W1211 23:56:59.315513  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:56:59.315521  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.315538  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:56:59.315544  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.315557  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:56:59.315565  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:56:59.315571  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
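Annotation: the kubelet problems flagged above ("no relationship found between node 'addons-680529' and this object") come from the apiserver's node authorizer. A kubelet may read a secret or configmap only once a pod scheduled to its node references it, and at startup the authorizer's object graph can briefly lag pod creation, so the first reflector LIST is denied and retried; the pods did come up later in this run. A quick by-hand check that the denied objects exist and became readable (a sketch; the context name is taken from this report):
	kubectl --context addons-680529 -n default get secret gcp-auth
	kubectl --context addons-680529 -n default get configmap kube-root-ca.crt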
	I1211 23:57:09.317176  273363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:57:09.331280  273363 api_server.go:72] duration metric: took 2m49.472318402s to wait for apiserver process to appear ...
	I1211 23:57:09.331308  273363 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:57:09.331343  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 23:57:09.331402  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 23:57:09.378597  273363 cri.go:89] found id: "a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:57:09.378623  273363 cri.go:89] found id: ""
	I1211 23:57:09.378631  273363 logs.go:282] 1 containers: [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9]
	I1211 23:57:09.378689  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.382269  273363 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 23:57:09.382343  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 23:57:09.423129  273363 cri.go:89] found id: "df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:57:09.423150  273363 cri.go:89] found id: ""
	I1211 23:57:09.423158  273363 logs.go:282] 1 containers: [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30]
	I1211 23:57:09.423216  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.427199  273363 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 23:57:09.427272  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 23:57:09.467492  273363 cri.go:89] found id: "a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:57:09.467516  273363 cri.go:89] found id: ""
	I1211 23:57:09.467525  273363 logs.go:282] 1 containers: [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f]
	I1211 23:57:09.467582  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.471293  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 23:57:09.471370  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 23:57:09.513018  273363 cri.go:89] found id: "a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:57:09.513037  273363 cri.go:89] found id: ""
	I1211 23:57:09.513045  273363 logs.go:282] 1 containers: [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8]
	I1211 23:57:09.513102  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.516829  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 23:57:09.516901  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 23:57:09.559664  273363 cri.go:89] found id: "f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:57:09.559683  273363 cri.go:89] found id: ""
	I1211 23:57:09.559691  273363 logs.go:282] 1 containers: [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73]
	I1211 23:57:09.559745  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.564724  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 23:57:09.564821  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 23:57:09.608178  273363 cri.go:89] found id: "b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:57:09.608202  273363 cri.go:89] found id: ""
	I1211 23:57:09.608211  273363 logs.go:282] 1 containers: [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7]
	I1211 23:57:09.608269  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.612621  273363 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 23:57:09.612726  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 23:57:09.670991  273363 cri.go:89] found id: "d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:57:09.671015  273363 cri.go:89] found id: ""
	I1211 23:57:09.671023  273363 logs.go:282] 1 containers: [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5]
	I1211 23:57:09.671084  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.674493  273363 logs.go:123] Gathering logs for kube-controller-manager [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7] ...
	I1211 23:57:09.674521  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:57:09.742051  273363 logs.go:123] Gathering logs for CRI-O ...
	I1211 23:57:09.742090  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 23:57:09.832554  273363 logs.go:123] Gathering logs for describe nodes ...
	I1211 23:57:09.832593  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 23:57:09.969424  273363 logs.go:123] Gathering logs for kube-apiserver [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9] ...
	I1211 23:57:09.969455  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:57:10.043312  273363 logs.go:123] Gathering logs for kube-proxy [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73] ...
	I1211 23:57:10.043354  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:57:10.087181  273363 logs.go:123] Gathering logs for coredns [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f] ...
	I1211 23:57:10.087213  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:57:10.145118  273363 logs.go:123] Gathering logs for kube-scheduler [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8] ...
	I1211 23:57:10.145154  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:57:10.208039  273363 logs.go:123] Gathering logs for kindnet [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5] ...
	I1211 23:57:10.208075  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:57:10.254205  273363 logs.go:123] Gathering logs for container status ...
	I1211 23:57:10.254236  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 23:57:10.304877  273363 logs.go:123] Gathering logs for kubelet ...
	I1211 23:57:10.304907  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 23:57:10.382798  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.069258    1516 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.383065  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:10.383249  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.383471  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:10.383635  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.383841  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:57:10.421815  273363 logs.go:123] Gathering logs for dmesg ...
	I1211 23:57:10.421849  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 23:57:10.438800  273363 logs.go:123] Gathering logs for etcd [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30] ...
	I1211 23:57:10.438873  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:57:10.504585  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:57:10.504621  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1211 23:57:10.504709  273363 out.go:270] X Problems detected in kubelet:
	W1211 23:57:10.504724  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:10.504732  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.504754  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:10.504764  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.504770  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:57:10.504781  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:57:10.504787  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:20.506748  273363 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1211 23:57:20.515102  273363 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1211 23:57:20.516136  273363 api_server.go:141] control plane version: v1.31.2
	I1211 23:57:20.516162  273363 api_server.go:131] duration metric: took 11.184846506s to wait for apiserver health ...
	I1211 23:57:20.516172  273363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:57:20.516193  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 23:57:20.516257  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 23:57:20.556860  273363 cri.go:89] found id: "a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:57:20.556885  273363 cri.go:89] found id: ""
	I1211 23:57:20.556893  273363 logs.go:282] 1 containers: [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9]
	I1211 23:57:20.556953  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.560462  273363 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 23:57:20.560539  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 23:57:20.598091  273363 cri.go:89] found id: "df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:57:20.598115  273363 cri.go:89] found id: ""
	I1211 23:57:20.598123  273363 logs.go:282] 1 containers: [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30]
	I1211 23:57:20.598204  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.601847  273363 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 23:57:20.601925  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 23:57:20.644333  273363 cri.go:89] found id: "a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:57:20.644356  273363 cri.go:89] found id: ""
	I1211 23:57:20.644365  273363 logs.go:282] 1 containers: [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f]
	I1211 23:57:20.644422  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.648306  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 23:57:20.648383  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 23:57:20.687325  273363 cri.go:89] found id: "a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:57:20.687350  273363 cri.go:89] found id: ""
	I1211 23:57:20.687358  273363 logs.go:282] 1 containers: [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8]
	I1211 23:57:20.687418  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.691075  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 23:57:20.691166  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 23:57:20.731502  273363 cri.go:89] found id: "f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:57:20.731528  273363 cri.go:89] found id: ""
	I1211 23:57:20.731537  273363 logs.go:282] 1 containers: [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73]
	I1211 23:57:20.731596  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.735345  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 23:57:20.735427  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 23:57:20.799679  273363 cri.go:89] found id: "b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:57:20.799703  273363 cri.go:89] found id: ""
	I1211 23:57:20.799713  273363 logs.go:282] 1 containers: [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7]
	I1211 23:57:20.799770  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.804067  273363 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 23:57:20.804144  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 23:57:20.877315  273363 cri.go:89] found id: "d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:57:20.877340  273363 cri.go:89] found id: ""
	I1211 23:57:20.877348  273363 logs.go:282] 1 containers: [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5]
	I1211 23:57:20.877406  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.881138  273363 logs.go:123] Gathering logs for kubelet ...
	I1211 23:57:20.881162  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 23:57:20.962683  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.069258    1516 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-680529' and this object
	W1211 23:57:20.962947  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:20.963136  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:20.963363  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:20.963528  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:20.963732  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:57:21.002499  273363 logs.go:123] Gathering logs for describe nodes ...
	I1211 23:57:21.002529  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 23:57:21.148920  273363 logs.go:123] Gathering logs for kube-apiserver [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9] ...
	I1211 23:57:21.148957  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:57:21.211526  273363 logs.go:123] Gathering logs for kube-proxy [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73] ...
	I1211 23:57:21.211560  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:57:21.253340  273363 logs.go:123] Gathering logs for CRI-O ...
	I1211 23:57:21.253369  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 23:57:21.350378  273363 logs.go:123] Gathering logs for kindnet [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5] ...
	I1211 23:57:21.350413  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:57:21.395975  273363 logs.go:123] Gathering logs for container status ...
	I1211 23:57:21.396002  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 23:57:21.445282  273363 logs.go:123] Gathering logs for dmesg ...
	I1211 23:57:21.445311  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 23:57:21.461131  273363 logs.go:123] Gathering logs for etcd [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30] ...
	I1211 23:57:21.461161  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:57:21.511518  273363 logs.go:123] Gathering logs for coredns [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f] ...
	I1211 23:57:21.511553  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:57:21.553703  273363 logs.go:123] Gathering logs for kube-scheduler [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8] ...
	I1211 23:57:21.553736  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:57:21.613757  273363 logs.go:123] Gathering logs for kube-controller-manager [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7] ...
	I1211 23:57:21.613790  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:57:21.686035  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:57:21.686067  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1211 23:57:21.686125  273363 out.go:270] X Problems detected in kubelet:
	W1211 23:57:21.686151  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:21.686159  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:21.686168  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:21.686174  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:21.686181  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:57:21.686192  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:57:21.686198  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:31.699507  273363 system_pods.go:59] 18 kube-system pods found
	I1211 23:57:31.699551  273363 system_pods.go:61] "coredns-7c65d6cfc9-ltfkm" [552c2c98-c09f-4851-86f5-93ea3c60d6b8] Running
	I1211 23:57:31.699559  273363 system_pods.go:61] "csi-hostpath-attacher-0" [1ff2269a-08fe-4383-be8d-d46a2b31efe3] Running
	I1211 23:57:31.699564  273363 system_pods.go:61] "csi-hostpath-resizer-0" [5a0bf1bf-83c1-460e-a41a-9950bfd8c409] Running
	I1211 23:57:31.699568  273363 system_pods.go:61] "csi-hostpathplugin-ltfzd" [472dd4a7-f472-4ea4-a78e-aff7da5aa7d5] Running
	I1211 23:57:31.699572  273363 system_pods.go:61] "etcd-addons-680529" [29c7c556-8282-42d4-8d66-b29a6d066eb7] Running
	I1211 23:57:31.699578  273363 system_pods.go:61] "kindnet-5n8x6" [fa640b02-6bf5-46fd-8c97-9292f66f15bb] Running
	I1211 23:57:31.699582  273363 system_pods.go:61] "kube-apiserver-addons-680529" [261548ca-d14b-4b3f-bffc-b8cc7f62f7cd] Running
	I1211 23:57:31.699586  273363 system_pods.go:61] "kube-controller-manager-addons-680529" [ea32f6bf-c0ae-4080-b39e-64568e70204f] Running
	I1211 23:57:31.699591  273363 system_pods.go:61] "kube-ingress-dns-minikube" [e15ef8b4-426e-4564-b396-6c78ba49bfbf] Running
	I1211 23:57:31.699595  273363 system_pods.go:61] "kube-proxy-rl6lb" [46b9b123-b304-41dc-8f4b-94ede15fd378] Running
	I1211 23:57:31.699600  273363 system_pods.go:61] "kube-scheduler-addons-680529" [f05e82d8-6388-4d24-8ce3-b77be14393b5] Running
	I1211 23:57:31.699604  273363 system_pods.go:61] "metrics-server-84c5f94fbc-c68dp" [09bd89d6-eb8c-4252-ae07-4d3b5b855169] Running
	I1211 23:57:31.699608  273363 system_pods.go:61] "nvidia-device-plugin-daemonset-pcmmw" [165e1834-cab1-404d-bc96-38a766c51940] Running
	I1211 23:57:31.699642  273363 system_pods.go:61] "registry-5cc95cd69-xnkxj" [13f2d3d8-1d08-41f1-80e2-d19e09a1c46d] Running
	I1211 23:57:31.699677  273363 system_pods.go:61] "registry-proxy-f2dfg" [79eadeb8-583a-4e72-87f2-bd4c865a9319] Running
	I1211 23:57:31.699708  273363 system_pods.go:61] "snapshot-controller-56fcc65765-9bmsg" [849b66e3-659e-432e-88d7-97ec947ba293] Running
	I1211 23:57:31.699742  273363 system_pods.go:61] "snapshot-controller-56fcc65765-gcl6n" [b1322253-a509-4079-a8ef-a53886d23acf] Running
	I1211 23:57:31.699771  273363 system_pods.go:61] "storage-provisioner" [a2973b5d-d765-4e68-ad3c-31a62ab3399d] Running
	I1211 23:57:31.699801  273363 system_pods.go:74] duration metric: took 11.183622304s to wait for pod list to return data ...
	I1211 23:57:31.699822  273363 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:57:31.702650  273363 default_sa.go:45] found service account: "default"
	I1211 23:57:31.702678  273363 default_sa.go:55] duration metric: took 2.832258ms for default service account to be created ...
	I1211 23:57:31.702689  273363 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:57:31.714502  273363 system_pods.go:86] 18 kube-system pods found
	I1211 23:57:31.714543  273363 system_pods.go:89] "coredns-7c65d6cfc9-ltfkm" [552c2c98-c09f-4851-86f5-93ea3c60d6b8] Running
	I1211 23:57:31.714557  273363 system_pods.go:89] "csi-hostpath-attacher-0" [1ff2269a-08fe-4383-be8d-d46a2b31efe3] Running
	I1211 23:57:31.714562  273363 system_pods.go:89] "csi-hostpath-resizer-0" [5a0bf1bf-83c1-460e-a41a-9950bfd8c409] Running
	I1211 23:57:31.714568  273363 system_pods.go:89] "csi-hostpathplugin-ltfzd" [472dd4a7-f472-4ea4-a78e-aff7da5aa7d5] Running
	I1211 23:57:31.714573  273363 system_pods.go:89] "etcd-addons-680529" [29c7c556-8282-42d4-8d66-b29a6d066eb7] Running
	I1211 23:57:31.714583  273363 system_pods.go:89] "kindnet-5n8x6" [fa640b02-6bf5-46fd-8c97-9292f66f15bb] Running
	I1211 23:57:31.714591  273363 system_pods.go:89] "kube-apiserver-addons-680529" [261548ca-d14b-4b3f-bffc-b8cc7f62f7cd] Running
	I1211 23:57:31.714597  273363 system_pods.go:89] "kube-controller-manager-addons-680529" [ea32f6bf-c0ae-4080-b39e-64568e70204f] Running
	I1211 23:57:31.714607  273363 system_pods.go:89] "kube-ingress-dns-minikube" [e15ef8b4-426e-4564-b396-6c78ba49bfbf] Running
	I1211 23:57:31.714616  273363 system_pods.go:89] "kube-proxy-rl6lb" [46b9b123-b304-41dc-8f4b-94ede15fd378] Running
	I1211 23:57:31.714624  273363 system_pods.go:89] "kube-scheduler-addons-680529" [f05e82d8-6388-4d24-8ce3-b77be14393b5] Running
	I1211 23:57:31.714631  273363 system_pods.go:89] "metrics-server-84c5f94fbc-c68dp" [09bd89d6-eb8c-4252-ae07-4d3b5b855169] Running
	I1211 23:57:31.714636  273363 system_pods.go:89] "nvidia-device-plugin-daemonset-pcmmw" [165e1834-cab1-404d-bc96-38a766c51940] Running
	I1211 23:57:31.714641  273363 system_pods.go:89] "registry-5cc95cd69-xnkxj" [13f2d3d8-1d08-41f1-80e2-d19e09a1c46d] Running
	I1211 23:57:31.714653  273363 system_pods.go:89] "registry-proxy-f2dfg" [79eadeb8-583a-4e72-87f2-bd4c865a9319] Running
	I1211 23:57:31.714659  273363 system_pods.go:89] "snapshot-controller-56fcc65765-9bmsg" [849b66e3-659e-432e-88d7-97ec947ba293] Running
	I1211 23:57:31.714664  273363 system_pods.go:89] "snapshot-controller-56fcc65765-gcl6n" [b1322253-a509-4079-a8ef-a53886d23acf] Running
	I1211 23:57:31.714668  273363 system_pods.go:89] "storage-provisioner" [a2973b5d-d765-4e68-ad3c-31a62ab3399d] Running
	I1211 23:57:31.714676  273363 system_pods.go:126] duration metric: took 11.981258ms to wait for k8s-apps to be running ...
	I1211 23:57:31.714691  273363 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:57:31.714753  273363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:57:31.728985  273363 system_svc.go:56] duration metric: took 14.286576ms WaitForService to wait for kubelet
	I1211 23:57:31.729029  273363 kubeadm.go:582] duration metric: took 3m11.8700725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
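Annotation: the readiness gate that just completed checks, in order, the kube-system pod list, the default service account, k8s-apps running, and the kubelet unit. Roughly the same checks by hand (illustrative; profile and context names are from this run, and the collector's exact systemctl invocation is the one logged above):
	kubectl --context addons-680529 -n kube-system get pods          # the 18 pods listed above
	kubectl --context addons-680529 -n default get serviceaccount default
	minikube -p addons-680529 ssh -- sudo systemctl is-active kubelet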
	I1211 23:57:31.729054  273363 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:57:31.732455  273363 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1211 23:57:31.732486  273363 node_conditions.go:123] node cpu capacity is 2
	I1211 23:57:31.732500  273363 node_conditions.go:105] duration metric: took 3.433981ms to run NodePressure ...
	I1211 23:57:31.732514  273363 start.go:241] waiting for startup goroutines ...
	I1211 23:57:31.732521  273363 start.go:246] waiting for cluster config update ...
	I1211 23:57:31.732560  273363 start.go:255] writing updated cluster config ...
	I1211 23:57:31.732859  273363 ssh_runner.go:195] Run: rm -f paused
	I1211 23:57:32.187698  273363 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1211 23:57:32.189304  273363 out.go:177] * Done! kubectl is now configured to use "addons-680529" cluster and "default" namespace by default
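Annotation: the whole wait loop above repeats one pattern: enumerate each control-plane container with crictl, tail its logs, then poll the apiserver healthz endpoint until it returns 200/ok. A minimal by-hand version of the same probes (a sketch; the container ID and the 192.168.49.2:8443 endpoint are the ones recorded in this run, and anonymous access to /healthz is assumed per default RBAC):
	minikube -p addons-680529 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	minikube -p addons-680529 ssh -- sudo crictl logs --tail 400 a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9
	curl -sk https://192.168.49.2:8443/healthz    # expect: ok, as at 23:57:20 above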
	
	
	==> CRI-O <==
	Dec 11 23:59:14 addons-680529 crio[978]: time="2024-12-11 23:59:14.756123818Z" level=info msg="Removed pod sandbox: 5d10c4448b78dabb3790b841ba2130f7c397ad838b89e9bd5fdd805316ccd0a5" id=ced205ec-098e-4973-b42a-f15180d34d60 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.441567743Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-jqgpm/POD" id=630a115a-a4ac-4db7-8d8c-c454f124388b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.441631667Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.471115416Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-jqgpm Namespace:default ID:bdb21005531147a928714a3aeb8906bb9eaf6f22bdd291c63d7bdba955c5e9ab UID:55fc2746-7c6d-4d31-9b2e-59ad76de89c3 NetNS:/var/run/netns/e6db33e2-05ab-4973-bd62-9b23b4ef718b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.471161494Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-jqgpm to CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.492546253Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-jqgpm Namespace:default ID:bdb21005531147a928714a3aeb8906bb9eaf6f22bdd291c63d7bdba955c5e9ab UID:55fc2746-7c6d-4d31-9b2e-59ad76de89c3 NetNS:/var/run/netns/e6db33e2-05ab-4973-bd62-9b23b4ef718b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.492698911Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-jqgpm for CNI network kindnet (type=ptp)"
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.495806638Z" level=info msg="Ran pod sandbox bdb21005531147a928714a3aeb8906bb9eaf6f22bdd291c63d7bdba955c5e9ab with infra container: default/hello-world-app-55bf9c44b4-jqgpm/POD" id=630a115a-a4ac-4db7-8d8c-c454f124388b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.502397066Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1cc06b3b-8f7b-449d-a5b2-15bcdc65bc1a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.502633044Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=1cc06b3b-8f7b-449d-a5b2-15bcdc65bc1a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.503587233Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=f096eeb9-ad8d-43bd-b075-4c6d54a6e631 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.506919254Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 12 00:01:22 addons-680529 crio[978]: time="2024-12-12 00:01:22.761547086Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.502966428Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=f096eeb9-ad8d-43bd-b075-4c6d54a6e631 name=/runtime.v1.ImageService/PullImage
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.503903486Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2a91c6b1-5ad7-47b3-861e-448013a48a03 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.504546436Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2a91c6b1-5ad7-47b3-861e-448013a48a03 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.507842109Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bee327c6-1ba6-4d0e-9e24-b09e47da0755 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.509607660Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bee327c6-1ba6-4d0e-9e24-b09e47da0755 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.510600149Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-jqgpm/hello-world-app" id=b78da662-b14d-4365-a7f2-82bac1c5aaa6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.510698459Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.526908803Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c99ab735ee5bb9eaef1f631c77f8fa75dc3386103c1dd7b950db4feda9433e53/merged/etc/passwd: no such file or directory"
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.527112962Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c99ab735ee5bb9eaef1f631c77f8fa75dc3386103c1dd7b950db4feda9433e53/merged/etc/group: no such file or directory"
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.586135525Z" level=info msg="Created container 5290d168b14da0699e7f2805fa6643c6096df0fb00629360a6a30b630c9db577: default/hello-world-app-55bf9c44b4-jqgpm/hello-world-app" id=b78da662-b14d-4365-a7f2-82bac1c5aaa6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.586926661Z" level=info msg="Starting container: 5290d168b14da0699e7f2805fa6643c6096df0fb00629360a6a30b630c9db577" id=780bddfe-a552-4bf3-a84e-96dff23367e1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 00:01:23 addons-680529 crio[978]: time="2024-12-12 00:01:23.592607017Z" level=info msg="Started container" PID=8624 containerID=5290d168b14da0699e7f2805fa6643c6096df0fb00629360a6a30b630c9db577 description=default/hello-world-app-55bf9c44b4-jqgpm/hello-world-app id=780bddfe-a552-4bf3-a84e-96dff23367e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bdb21005531147a928714a3aeb8906bb9eaf6f22bdd291c63d7bdba955c5e9ab
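Annotation: this CRI-O excerpt is the tail of the crio unit journal (the collector ran "journalctl -u crio -n 400" earlier in this log) and shows the full CRI sequence for the hello-world-app pod: RunPodSandbox, ImageStatus/PullImage for docker.io/kicbase/echo-server:1.0, then CreateContainer and StartContainer. To re-read the same journal on the node:
	minikube -p addons-680529 ssh -- sudo journalctl -u crio -n 400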
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                       ATTEMPT             POD ID              POD
	5290d168b14da       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app            0                   bdb2100553114       hello-world-app-55bf9c44b4-jqgpm
	f17aefd0e9da0       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago            Running             nginx                      0                   887fe82cdd28f       nginx
	3904c932956c7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                    0                   d082248e224a4       busybox
	5aa64d451535e       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             5 minutes ago            Running             controller                 0                   49617e3f84cca       ingress-nginx-controller-5f85ff4588-jqqts
	caa2545993e42       gcr.io/cloud-spanner-emulator/emulator@sha256:7cf2be1ac85c39a0c5b34185b6c3d0ea479269f5c8ecc785713308f93194ca27               5 minutes ago            Running             cloud-spanner-emulator     0                   73e881f093594       cloud-spanner-emulator-dc5db94f4-s2gl8
	95fb6601ca7a5       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        5 minutes ago            Running             metrics-server             0                   390af4ded9379       metrics-server-84c5f94fbc-c68dp
	ca29fdca66f3c       nvcr.io/nvidia/k8s-device-plugin@sha256:7089559ce6153018806857f5049085bae15b3bf6f1c8bd19d8b12f707d087dea                     5 minutes ago            Running             nvidia-device-plugin-ctr   0                   dc91af8a20282       nvidia-device-plugin-daemonset-pcmmw
	2476eb344b197       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              patch                      0                   0d2232472ef89       ingress-nginx-admission-patch-8jgnc
	4aa3063bdb875       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              create                     0                   2e61cfe55db98       ingress-nginx-admission-create-2wwm9
	488478d8f00fa       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago            Running             local-path-provisioner     0                   f4194d217e3b5       local-path-provisioner-86d989889c-jw8w6
	8e2a51f6536c0       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              5 minutes ago            Running             yakd                       0                   88683e105cd53       yakd-dashboard-67d98fc6b-sknr2
	157ba2869e9fe       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns       0                   c5102ef9f561c       kube-ingress-dns-minikube
	a3933957bd198       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             6 minutes ago            Running             coredns                    0                   a07880fe8f66c       coredns-7c65d6cfc9-ltfkm
	7490f08ddea6e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago            Running             storage-provisioner        0                   4f0dd0feacea7       storage-provisioner
	d03a9536de261       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e                           6 minutes ago            Running             kindnet-cni                0                   f9313f7a66b9f       kindnet-5n8x6
	f5b9aebd301a3       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                             7 minutes ago            Running             kube-proxy                 0                   3cc29d3eca682       kube-proxy-rl6lb
	a7c5aafc3840b       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                             7 minutes ago            Running             kube-apiserver             0                   0dae4ef4a0f5c       kube-apiserver-addons-680529
	a8aa020a72093       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                             7 minutes ago            Running             kube-scheduler             0                   5a8f6d9e80745       kube-scheduler-addons-680529
	b109b488cf6dc       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                             7 minutes ago            Running             kube-controller-manager    0                   3d88ef9153c55       kube-controller-manager-addons-680529
	df37df1745de7       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             7 minutes ago            Running             etcd                       0                   9e281b2641fe3       etcd-addons-680529
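Annotation: this table is the "container status" gather step; the collector's command (with a docker fallback) is logged above and the crictl form can be run directly on the node:
	minikube -p addons-680529 ssh -- sudo crictl ps -a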
	
	
	==> coredns [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f] <==
	[INFO] 10.244.0.3:52086 - 17834 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001526802s
	[INFO] 10.244.0.3:52086 - 27376 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000068003s
	[INFO] 10.244.0.3:52086 - 4556 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000048943s
	[INFO] 10.244.0.3:43692 - 48838 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000110464s
	[INFO] 10.244.0.3:43692 - 48592 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000248973s
	[INFO] 10.244.0.3:59035 - 29108 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110858s
	[INFO] 10.244.0.3:59035 - 28903 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000172165s
	[INFO] 10.244.0.3:54490 - 44041 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052701s
	[INFO] 10.244.0.3:54490 - 43844 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041559s
	[INFO] 10.244.0.3:57014 - 3400 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00139425s
	[INFO] 10.244.0.3:57014 - 3622 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001576155s
	[INFO] 10.244.0.3:42325 - 20046 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071171s
	[INFO] 10.244.0.3:42325 - 19865 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108896s
	[INFO] 10.244.0.21:34832 - 64374 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167841s
	[INFO] 10.244.0.21:34287 - 47550 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000110078s
	[INFO] 10.244.0.21:51638 - 4146 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143916s
	[INFO] 10.244.0.21:43029 - 22702 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000203549s
	[INFO] 10.244.0.21:49084 - 62101 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168506s
	[INFO] 10.244.0.21:52465 - 15182 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118283s
	[INFO] 10.244.0.21:49233 - 41176 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002288373s
	[INFO] 10.244.0.21:51947 - 39992 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001826038s
	[INFO] 10.244.0.21:42498 - 29029 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000841546s
	[INFO] 10.244.0.21:35059 - 17723 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000726388s
	[INFO] 10.244.0.24:56014 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000190315s
	[INFO] 10.244.0.24:42384 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000241268s
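Annotation: the NXDOMAIN runs above are expected, not failures. With the default ndots:5, a pod resolving registry.kube-system.svc.cluster.local first tries it against every search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal, in exactly the order logged) before the absolute query answers NOERROR. One way to see the search list driving this, using the registry pod name from this report (illustrative; 10.96.0.10 as the kube-dns ClusterIP is an assumed default, not captured in this run):
	kubectl --context addons-680529 -n kube-system exec registry-5cc95cd69-xnkxj -- cat /etc/resolv.conf
	# typically: search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	#            nameserver 10.96.0.10
	#            options ndots:5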
	
	
	==> describe nodes <==
	Name:               addons-680529
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-680529
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=addons-680529
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T23_54_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-680529
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:54:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-680529
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:01:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Dec 2024 23:59:21 +0000   Wed, 11 Dec 2024 23:54:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Dec 2024 23:59:21 +0000   Wed, 11 Dec 2024 23:54:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Dec 2024 23:59:21 +0000   Wed, 11 Dec 2024 23:54:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Dec 2024 23:59:21 +0000   Wed, 11 Dec 2024 23:55:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-680529
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc891a14bf004a35b657861066d42169
	  System UUID:                0af98c1a-d97e-4b29-afb3-458739a2719a
	  Boot ID:                    841b5c7a-a318-4122-9975-963f80741cc3
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  default                     cloud-spanner-emulator-dc5db94f4-s2gl8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  default                     hello-world-app-55bf9c44b4-jqgpm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-jqqts    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m58s
	  kube-system                 coredns-7c65d6cfc9-ltfkm                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m5s
	  kube-system                 etcd-addons-680529                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m10s
	  kube-system                 kindnet-5n8x6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m5s
	  kube-system                 kube-apiserver-addons-680529                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-addons-680529        200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-proxy-rl6lb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-addons-680529                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 metrics-server-84c5f94fbc-c68dp              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m59s
	  kube-system                 nvidia-device-plugin-daemonset-pcmmw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  local-path-storage          local-path-provisioner-86d989889c-jw8w6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-sknr2               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m57s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m17s (x8 over 7m17s)  kubelet          Node addons-680529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m17s (x8 over 7m17s)  kubelet          Node addons-680529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m17s (x7 over 7m17s)  kubelet          Node addons-680529 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m10s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m10s                  kubelet          Node addons-680529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m10s                  kubelet          Node addons-680529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m10s                  kubelet          Node addons-680529 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m6s                   node-controller  Node addons-680529 event: Registered Node addons-680529 in Controller
	  Normal   NodeReady                6m17s                  kubelet          Node addons-680529 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec11 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014241] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.484923] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027949] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.031181] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017950] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.643593] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.899190] kauditd_printk_skb: 36 callbacks suppressed
	[Dec11 23:00] hrtimer: interrupt took 6733940 ns
	[Dec11 23:22] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30] <==
	{"level":"warn","ts":"2024-12-11T23:54:21.370524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.50314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-12-11T23:54:21.379520Z","caller":"traceutil/trace.go:171","msg":"trace[444463860] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:385; }","duration":"212.453911ms","start":"2024-12-11T23:54:21.166987Z","end":"2024-12-11T23:54:21.379441Z","steps":["trace[444463860] 'agreement among raft nodes before linearized reading'  (duration: 203.423741ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:21.366374Z","caller":"traceutil/trace.go:171","msg":"trace[352781222] linearizableReadLoop","detail":"{readStateIndex:395; appliedIndex:394; }","duration":"199.3597ms","start":"2024-12-11T23:54:21.166992Z","end":"2024-12-11T23:54:21.366352Z","steps":["trace[352781222] 'read index received'  (duration: 11.770343ms)","trace[352781222] 'applied index is now lower than readState.Index'  (duration: 187.58409ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-11T23:54:21.653775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"475.016139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:54:21.654614Z","caller":"traceutil/trace.go:171","msg":"trace[1203767566] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:385; }","duration":"475.867538ms","start":"2024-12-11T23:54:21.178727Z","end":"2024-12-11T23:54:21.654594Z","steps":["trace[1203767566] 'agreement among raft nodes before linearized reading'  (duration: 262.674349ms)","trace[1203767566] 'range keys from in-memory index tree'  (duration: 189.384529ms)","trace[1203767566] 'filter and sort the key-value pairs'  (duration: 22.937011ms)"],"step_count":3}
	{"level":"warn","ts":"2024-12-11T23:54:21.654936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:54:21.178686Z","time spent":"476.230092ms","remote":"127.0.0.1:59230","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts\" limit:1 "}
	{"level":"info","ts":"2024-12-11T23:54:21.770483Z","caller":"traceutil/trace.go:171","msg":"trace[1290719041] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"122.008564ms","start":"2024-12-11T23:54:21.648459Z","end":"2024-12-11T23:54:21.770468Z","steps":["trace[1290719041] 'process raft request'  (duration: 121.907577ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:54:23.336671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.513976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2024-12-11T23:54:23.379068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.91419ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033846599658909 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/default/cloud-spanner-emulator\" mod_revision:0 > success:<request_put:<key:\"/registry/deployments/default/cloud-spanner-emulator\" value_size:2570 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-11T23:54:23.385475Z","caller":"traceutil/trace.go:171","msg":"trace[1701703965] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"172.195723ms","start":"2024-12-11T23:54:23.213253Z","end":"2024-12-11T23:54:23.385449Z","steps":["trace[1701703965] 'compare'  (duration: 62.593774ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:23.390910Z","caller":"traceutil/trace.go:171","msg":"trace[17707196] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:392; }","duration":"177.755625ms","start":"2024-12-11T23:54:23.213130Z","end":"2024-12-11T23:54:23.390886Z","steps":["trace[17707196] 'range keys from in-memory index tree'  (duration: 123.355292ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:23.392941Z","caller":"traceutil/trace.go:171","msg":"trace[1818917909] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"139.273538ms","start":"2024-12-11T23:54:23.253656Z","end":"2024-12-11T23:54:23.392929Z","steps":["trace[1818917909] 'process raft request'  (duration: 139.051675ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:23.393100Z","caller":"traceutil/trace.go:171","msg":"trace[810089605] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"178.740937ms","start":"2024-12-11T23:54:23.214352Z","end":"2024-12-11T23:54:23.393092Z","steps":["trace[810089605] 'process raft request'  (duration: 171.0837ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:23.577334Z","caller":"traceutil/trace.go:171","msg":"trace[1751098335] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"111.770037ms","start":"2024-12-11T23:54:23.465552Z","end":"2024-12-11T23:54:23.577322Z","steps":["trace[1751098335] 'process raft request'  (duration: 111.661091ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:24.850250Z","caller":"traceutil/trace.go:171","msg":"trace[1166015492] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"143.11542ms","start":"2024-12-11T23:54:24.707074Z","end":"2024-12-11T23:54:24.850189Z","steps":["trace[1166015492] 'process raft request'  (duration: 128.523134ms)","trace[1166015492] 'compare'  (duration: 14.491094ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:54:24.911167Z","caller":"traceutil/trace.go:171","msg":"trace[1052296647] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"196.419435ms","start":"2024-12-11T23:54:24.714735Z","end":"2024-12-11T23:54:24.911154Z","steps":["trace[1052296647] 'process raft request'  (duration: 196.309036ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:25.055043Z","caller":"traceutil/trace.go:171","msg":"trace[214184549] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"130.769497ms","start":"2024-12-11T23:54:24.924254Z","end":"2024-12-11T23:54:25.055023Z","steps":["trace[214184549] 'process raft request'  (duration: 83.648791ms)","trace[214184549] 'compare'  (duration: 46.89173ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:54:25.056665Z","caller":"traceutil/trace.go:171","msg":"trace[1812331807] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"128.351936ms","start":"2024-12-11T23:54:24.928287Z","end":"2024-12-11T23:54:25.056639Z","steps":["trace[1812331807] 'process raft request'  (duration: 126.660862ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:25.056818Z","caller":"traceutil/trace.go:171","msg":"trace[625613241] linearizableReadLoop","detail":"{readStateIndex:461; appliedIndex:459; }","duration":"109.103373ms","start":"2024-12-11T23:54:24.947684Z","end":"2024-12-11T23:54:25.056788Z","steps":["trace[625613241] 'read index received'  (duration: 60.17445ms)","trace[625613241] 'applied index is now lower than readState.Index'  (duration: 48.926716ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:54:25.057023Z","caller":"traceutil/trace.go:171","msg":"trace[1337573269] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"109.260917ms","start":"2024-12-11T23:54:24.947753Z","end":"2024-12-11T23:54:25.057014Z","steps":["trace[1337573269] 'process raft request'  (duration: 108.181517ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:25.057096Z","caller":"traceutil/trace.go:171","msg":"trace[864608425] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"109.276613ms","start":"2024-12-11T23:54:24.947808Z","end":"2024-12-11T23:54:25.057084Z","steps":["trace[864608425] 'process raft request'  (duration: 108.295764ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:54:25.057504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.79375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:54:25.058062Z","caller":"traceutil/trace.go:171","msg":"trace[1044297474] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:454; }","duration":"110.368263ms","start":"2024-12-11T23:54:24.947680Z","end":"2024-12-11T23:54:25.058048Z","steps":["trace[1044297474] 'agreement among raft nodes before linearized reading'  (duration: 109.775091ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:54:25.347266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.395011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-680529\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-12-11T23:54:25.354068Z","caller":"traceutil/trace.go:171","msg":"trace[140567419] range","detail":"{range_begin:/registry/minions/addons-680529; range_end:; response_count:1; response_revision:473; }","duration":"110.200127ms","start":"2024-12-11T23:54:25.243851Z","end":"2024-12-11T23:54:25.354051Z","steps":["trace[140567419] 'agreement among raft nodes before linearized reading'  (duration: 103.233668ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:01:24 up  1:43,  0 users,  load average: 0.84, 1.54, 2.37
	Linux addons-680529 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5] <==
	I1211 23:59:16.426403       1 main.go:301] handling current node
	I1211 23:59:26.423708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:26.423739       1 main.go:301] handling current node
	I1211 23:59:36.430213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:36.430247       1 main.go:301] handling current node
	I1211 23:59:46.429802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:46.429832       1 main.go:301] handling current node
	I1211 23:59:56.426564       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1211 23:59:56.426602       1 main.go:301] handling current node
	I1212 00:00:06.423522       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:06.423639       1 main.go:301] handling current node
	I1212 00:00:16.423030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:16.423143       1 main.go:301] handling current node
	I1212 00:00:26.423798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:26.423831       1 main.go:301] handling current node
	I1212 00:00:36.429452       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:36.429485       1 main.go:301] handling current node
	I1212 00:00:46.423275       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:46.425262       1 main.go:301] handling current node
	I1212 00:00:56.427755       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:00:56.427874       1 main.go:301] handling current node
	I1212 00:01:06.423115       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:01:06.423151       1 main.go:301] handling current node
	I1212 00:01:16.430303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:01:16.430334       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9] <==
	 > logger="UnhandledError"
	E1211 23:56:57.739139       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.204.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.204.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.204.160:443: connect: connection refused" logger="UnhandledError"
	I1211 23:56:57.809537       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1211 23:57:42.296301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49024: use of closed network connection
	E1211 23:57:42.717363       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49058: use of closed network connection
	I1211 23:57:52.072376       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.224.17"}
	I1211 23:58:27.582233       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1211 23:58:42.086534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.086697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:58:42.109200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.109262       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:58:42.133832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.134807       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:58:42.245941       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.246076       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:58:42.276269       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.276705       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1211 23:58:43.245529       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1211 23:58:43.277146       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1211 23:58:43.320032       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1211 23:58:55.793598       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1211 23:58:56.830906       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1211 23:59:01.356984       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1211 23:59:01.660892       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.191.192"}
	I1212 00:01:22.380411       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.146.50"}
	
	
	==> kube-controller-manager [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7] <==
	E1211 23:59:49.214343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:59:52.057922       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:59:52.057976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:00:04.787259       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:00:04.787306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:00:09.695327       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:00:09.695370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:00:23.515708       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:00:23.515751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:00:43.550417       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:00:43.550465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:00:43.858123       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:00:43.858205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:00:49.400910       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:00:49.401028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:00:59.099010       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:00:59.099065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:01:14.239581       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:01:14.239628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:01:21.446762       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:01:21.446807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1212 00:01:22.130462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="28.383063ms"
	I1212 00:01:22.153090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="22.513957ms"
	I1212 00:01:22.153242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.154µs"
	I1212 00:01:22.160916       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="25.115µs"
	
	
	==> kube-proxy [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73] <==
	I1211 23:54:24.089049       1 server_linux.go:66] "Using iptables proxy"
	I1211 23:54:25.397625       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1211 23:54:25.495735       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:54:26.266322       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1211 23:54:26.272807       1 server_linux.go:169] "Using iptables Proxier"
	I1211 23:54:26.466871       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:54:26.470439       1 server.go:483] "Version info" version="v1.31.2"
	I1211 23:54:26.470673       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:54:26.472641       1 config.go:199] "Starting service config controller"
	I1211 23:54:26.472724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1211 23:54:26.472767       1 config.go:105] "Starting endpoint slice config controller"
	I1211 23:54:26.472813       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1211 23:54:26.473239       1 config.go:328] "Starting node config controller"
	I1211 23:54:26.473296       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1211 23:54:26.575402       1 shared_informer.go:320] Caches are synced for node config
	I1211 23:54:26.611677       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1211 23:54:26.625457       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8] <==
	W1211 23:54:12.000474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1211 23:54:12.002891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:12.000682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1211 23:54:12.003109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:12.002522       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1211 23:54:12.003235       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1211 23:54:12.841854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1211 23:54:12.841964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:12.843119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1211 23:54:12.843211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:12.856909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1211 23:54:12.857037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.107467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1211 23:54:13.107623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.115151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1211 23:54:13.115200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.194793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1211 23:54:13.194840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.212461       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1211 23:54:13.212598       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1211 23:54:13.229088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1211 23:54:13.229232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.243534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1211 23:54:13.243578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1211 23:54:16.179736       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 11 23:59:44 addons-680529 kubelet[1516]: E1211 23:59:44.669328    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961584668942027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:59:46 addons-680529 kubelet[1516]: I1211 23:59:46.536170    1516 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 11 23:59:54 addons-680529 kubelet[1516]: E1211 23:59:54.672269    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961594672015953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:59:54 addons-680529 kubelet[1516]: E1211 23:59:54.672309    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961594672015953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:04 addons-680529 kubelet[1516]: E1212 00:00:04.674900    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961604674642718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:04 addons-680529 kubelet[1516]: E1212 00:00:04.674946    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961604674642718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:14 addons-680529 kubelet[1516]: E1212 00:00:14.676821    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961614676614689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:14 addons-680529 kubelet[1516]: E1212 00:00:14.676861    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961614676614689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:24 addons-680529 kubelet[1516]: E1212 00:00:24.679002    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961624678773767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:24 addons-680529 kubelet[1516]: E1212 00:00:24.679042    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961624678773767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:33 addons-680529 kubelet[1516]: I1212 00:00:33.536593    1516 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-pcmmw" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:00:34 addons-680529 kubelet[1516]: E1212 00:00:34.682091    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961634681876386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:34 addons-680529 kubelet[1516]: E1212 00:00:34.682162    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961634681876386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:44 addons-680529 kubelet[1516]: E1212 00:00:44.686334    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961644685586290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:44 addons-680529 kubelet[1516]: E1212 00:00:44.686372    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961644685586290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:46 addons-680529 kubelet[1516]: I1212 00:00:46.536307    1516 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-dc5db94f4-s2gl8" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:00:54 addons-680529 kubelet[1516]: E1212 00:00:54.688257    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961654688042213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:00:54 addons-680529 kubelet[1516]: E1212 00:00:54.688293    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961654688042213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:01:04 addons-680529 kubelet[1516]: E1212 00:01:04.690242    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961664690009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:01:04 addons-680529 kubelet[1516]: E1212 00:01:04.690278    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961664690009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:01:10 addons-680529 kubelet[1516]: I1212 00:01:10.536862    1516 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:01:14 addons-680529 kubelet[1516]: E1212 00:01:14.692163    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961674691982848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:01:14 addons-680529 kubelet[1516]: E1212 00:01:14.692196    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961674691982848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:01:22 addons-680529 kubelet[1516]: I1212 00:01:22.139233    1516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=138.801971964 podStartE2EDuration="2m21.139215534s" podCreationTimestamp="2024-12-11 23:59:01 +0000 UTC" firstStartedPulling="2024-12-11 23:59:01.942766937 +0000 UTC m=+287.544181697" lastFinishedPulling="2024-12-11 23:59:04.280010499 +0000 UTC m=+289.881425267" observedRunningTime="2024-12-11 23:59:05.268021725 +0000 UTC m=+290.869436493" watchObservedRunningTime="2024-12-12 00:01:22.139215534 +0000 UTC m=+427.740630294"
	Dec 12 00:01:22 addons-680529 kubelet[1516]: I1212 00:01:22.256910    1516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tnvx\" (UniqueName: \"kubernetes.io/projected/55fc2746-7c6d-4d31-9b2e-59ad76de89c3-kube-api-access-8tnvx\") pod \"hello-world-app-55bf9c44b4-jqgpm\" (UID: \"55fc2746-7c6d-4d31-9b2e-59ad76de89c3\") " pod="default/hello-world-app-55bf9c44b4-jqgpm"
	
	
	==> storage-provisioner [7490f08ddea6e00aacfd56cb8ac004428cc45925332eba9a484eff6d8c5f51ae] <==
	I1211 23:55:08.091934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1211 23:55:08.104292       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1211 23:55:08.104409       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1211 23:55:08.114587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1211 23:55:08.114852       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-680529_667ac295-5fe0-4dd5-9b7a-57923818fe01!
	I1211 23:55:08.115033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ecd09072-3fb9-47fc-b701-e486ef4c06c6", APIVersion:"v1", ResourceVersion:"931", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-680529_667ac295-5fe0-4dd5-9b7a-57923818fe01 became leader
	I1211 23:55:08.215363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-680529_667ac295-5fe0-4dd5-9b7a-57923818fe01!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-680529 -n addons-680529
helpers_test.go:261: (dbg) Run:  kubectl --context addons-680529 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-2wwm9 ingress-nginx-admission-patch-8jgnc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-680529 describe pod ingress-nginx-admission-create-2wwm9 ingress-nginx-admission-patch-8jgnc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-680529 describe pod ingress-nginx-admission-create-2wwm9 ingress-nginx-admission-patch-8jgnc: exit status 1 (86.465592ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2wwm9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8jgnc" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-680529 describe pod ingress-nginx-admission-create-2wwm9 ingress-nginx-admission-patch-8jgnc: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 addons disable ingress-dns --alsologtostderr -v=1: (1.227877117s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 addons disable ingress --alsologtostderr -v=1: (7.734354016s)
--- FAIL: TestAddons/parallel/Ingress (153.33s)
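
A minimal follow-up sketch for this failure (hypothetical diagnostic commands, not part of the test suite; they assume the addons-680529 cluster from this run is still up, and the nginx Service ClusterIP 10.100.191.192 is taken from the kube-apiserver log above):

	# Confirm the Ingress resource and the controller pod are present
	kubectl --context addons-680529 get ingress -A
	kubectl --context addons-680529 -n ingress-nginx get pods -o wide

	# Bypass the controller and curl the nginx Service directly from the node;
	# -m 10 caps the request instead of waiting out curl's default timeout
	out/minikube-linux-arm64 -p addons-680529 ssh "curl -s -m 10 http://10.100.191.192/"

If the direct Service curl succeeds while requests through the controller time out, the problem is likely in the ingress-nginx data path rather than in the nginx pod itself.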

TestAddons/parallel/MetricsServer (367.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.46161ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-c68dp" [09bd89d6-eb8c-4252-ae07-4d3b5b855169] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005001677s
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (82.621641ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 3m55.846356207s

** /stderr **
I1211 23:58:14.850441  272599 retry.go:31] will retry after 4.418057216s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (179.74123ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 4m0.446548184s

** /stderr **
I1211 23:58:19.449335  272599 retry.go:31] will retry after 5.71556852s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (88.392243ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 4m6.250709025s

** /stderr **
I1211 23:58:25.253664  272599 retry.go:31] will retry after 9.303894783s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (100.860628ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 4m15.65536254s

** /stderr **
I1211 23:58:34.658748  272599 retry.go:31] will retry after 12.506999425s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (86.309934ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 4m28.249103349s

** /stderr **
I1211 23:58:47.252744  272599 retry.go:31] will retry after 20.767077804s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (102.915413ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 4m49.119777712s

** /stderr **
I1211 23:59:08.123151  272599 retry.go:31] will retry after 16.650385094s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (80.749477ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 5m5.852039295s

** /stderr **
I1211 23:59:24.855212  272599 retry.go:31] will retry after 22.965305586s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (88.571233ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 5m28.910604662s

** /stderr **
I1211 23:59:47.913500  272599 retry.go:31] will retry after 47.35101243s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (85.542231ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 6m16.349352917s

** /stderr **
I1212 00:00:35.352305  272599 retry.go:31] will retry after 53.652267439s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (79.248991ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 7m10.082114945s

                                                
                                                
** /stderr **
I1212 00:01:29.085052  272599 retry.go:31] will retry after 1m5.682964105s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (79.240626ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 8m15.845513829s

                                                
                                                
** /stderr **
I1212 00:02:34.848824  272599 retry.go:31] will retry after 52.220763327s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (90.671416ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 9m8.157292157s

                                                
                                                
** /stderr **
I1212 00:03:27.160897  272599 retry.go:31] will retry after 46.3651579s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-680529 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-680529 top pods -n kube-system: exit status 1 (79.503987ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ltfkm, age: 9m54.603654442s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
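The loop above exhausts its retries because the aggregated Metrics API never starts serving pod metrics. A minimal triage sketch for this failure mode, assuming the addon's usual deployment name metrics-server in kube-system (the name is not confirmed by this log):

	# Is the aggregated Metrics API registered and marked Available?
	kubectl --context addons-680529 get apiservice v1beta1.metrics.k8s.io
	# Is the metrics-server deployment up, and what do its logs say about scrapes?
	kubectl --context addons-680529 -n kube-system get deploy metrics-server
	kubectl --context addons-680529 -n kube-system logs deploy/metrics-server --tail=50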
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-680529
helpers_test.go:235: (dbg) docker inspect addons-680529:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2",
	        "Created": "2024-12-11T23:53:50.308547561Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-11T23:53:50.457736916Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02e8be8b1127faa30f09fff745d2a6d385248178d204468bf667a69a71dbf447",
	        "ResolvConfPath": "/var/lib/docker/containers/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/hosts",
	        "LogPath": "/var/lib/docker/containers/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2-json.log",
	        "Name": "/addons-680529",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-680529:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-680529",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a84df4a719abdcecfc7f87bac585e4d175f7e4e2636079a8f9517afb944c65a0-init/diff:/var/lib/docker/overlay2/cae28b97ef808ae95cc2fc3d05edfc376b87c790784199a6aea276c80f286d94/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a84df4a719abdcecfc7f87bac585e4d175f7e4e2636079a8f9517afb944c65a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a84df4a719abdcecfc7f87bac585e4d175f7e4e2636079a8f9517afb944c65a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a84df4a719abdcecfc7f87bac585e4d175f7e4e2636079a8f9517afb944c65a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-680529",
	                "Source": "/var/lib/docker/volumes/addons-680529/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-680529",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-680529",
	                "name.minikube.sigs.k8s.io": "addons-680529",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56251203e7d0287d933a0bbfa4ec2bb99d01ae9e5c606af8a1ed6fc050471037",
	            "SandboxKey": "/var/run/docker/netns/56251203e7d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-680529": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ddce59baa39155a23e7cd0bf9b9b67d093c265faf900566172f0f882de75a5c8",
	                    "EndpointID": "f4001ddeb82e66b6d7a075f2eb10cbff954e913e3c37fc7a40eaab0f61aa9735",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-680529",
	                        "1574e2ba69a2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
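Single fields can be pulled out of an inspect dump like the one above with a Go template instead of reading the full JSON; as a sketch, the first format string below is the same one minikube itself runs later in this log:

	# Host port forwarded to the container's SSH port (22/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-680529
	# Container IP on the addons-680529 bridge network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-680529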
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-680529 -n addons-680529
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 logs -n 25: (1.43748661s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-474606 | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | download-docker-474606                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-474606                                                                   | download-docker-474606 | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-562366   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | binary-mirror-562366                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39867                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-562366                                                                     | binary-mirror-562366   | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| addons  | enable dashboard -p                                                                         | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | addons-680529                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | addons-680529                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-680529 --wait=true                                                                | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable                                                                | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:57 UTC | 11 Dec 24 23:57 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable                                                                | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:57 UTC | 11 Dec 24 23:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:57 UTC | 11 Dec 24 23:57 UTC |
	|         | -p addons-680529                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable                                                                | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-680529 ip                                                                            | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	| addons  | addons-680529 addons disable                                                                | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-680529 addons                                                                        | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-680529 addons                                                                        | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-680529 addons                                                                        | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:58 UTC | 11 Dec 24 23:59 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-680529 ssh curl -s                                                                   | addons-680529          | jenkins | v1.34.0 | 11 Dec 24 23:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-680529 ip                                                                            | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:01 UTC | 12 Dec 24 00:01 UTC |
	| addons  | addons-680529 addons disable                                                                | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:01 UTC | 12 Dec 24 00:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable                                                                | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:01 UTC | 12 Dec 24 00:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-680529 addons                                                                        | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:01 UTC | 12 Dec 24 00:01 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable                                                                | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:01 UTC | 12 Dec 24 00:01 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-680529 ssh cat                                                                       | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:02 UTC | 12 Dec 24 00:02 UTC |
	|         | /opt/local-path-provisioner/pvc-179f3b88-d822-4d7e-95c6-fe03050f1eae_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-680529 addons disable                                                                | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:02 UTC | 12 Dec 24 00:02 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-680529 addons                                                                        | addons-680529          | jenkins | v1.34.0 | 12 Dec 24 00:02 UTC | 12 Dec 24 00:02 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
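	For reference, the failing cluster was created by the start entry in the table above; reconstructed here as a single command (flags copied from the Audit log, none added):
	
	minikube start -p addons-680529 --wait=true --memory=4000 --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots \
	  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	  --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	  --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio \
	  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher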
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:53:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:53:26.140694  273363 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:53:26.140871  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:53:26.140881  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:53:26.140887  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:53:26.141138  273363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1211 23:53:26.141624  273363 out.go:352] Setting JSON to false
	I1211 23:53:26.142519  273363 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5748,"bootTime":1733955459,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1211 23:53:26.142603  273363 start.go:139] virtualization:  
	I1211 23:53:26.144642  273363 out.go:177] * [addons-680529] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1211 23:53:26.145939  273363 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:53:26.146010  273363 notify.go:220] Checking for updates...
	I1211 23:53:26.148409  273363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:53:26.149818  273363 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1211 23:53:26.151440  273363 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	I1211 23:53:26.152528  273363 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 23:53:26.153714  273363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:53:26.155187  273363 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:53:26.176944  273363 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1211 23:53:26.177079  273363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:53:26.243886  273363 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-11 23:53:26.235020436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1211 23:53:26.243994  273363 docker.go:318] overlay module found
	I1211 23:53:26.245328  273363 out.go:177] * Using the docker driver based on user configuration
	I1211 23:53:26.246391  273363 start.go:297] selected driver: docker
	I1211 23:53:26.246407  273363 start.go:901] validating driver "docker" against <nil>
	I1211 23:53:26.246420  273363 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:53:26.247127  273363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:53:26.297176  273363 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-11 23:53:26.287858692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1211 23:53:26.297406  273363 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:53:26.297635  273363 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:53:26.299324  273363 out.go:177] * Using Docker driver with root privileges
	I1211 23:53:26.300687  273363 cni.go:84] Creating CNI manager for ""
	I1211 23:53:26.300749  273363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:53:26.300766  273363 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:53:26.300855  273363 start.go:340] cluster config:
	{Name:addons-680529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:53:26.302502  273363 out.go:177] * Starting "addons-680529" primary control-plane node in "addons-680529" cluster
	I1211 23:53:26.303640  273363 cache.go:121] Beginning downloading kic base image for docker with crio
	I1211 23:53:26.305136  273363 out.go:177] * Pulling base image v0.0.45-1733912881-20083 ...
	I1211 23:53:26.306304  273363 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:53:26.306361  273363 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1211 23:53:26.306373  273363 cache.go:56] Caching tarball of preloaded images
	I1211 23:53:26.306392  273363 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1211 23:53:26.306466  273363 preload.go:172] Found /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 23:53:26.306477  273363 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:53:26.306827  273363 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/config.json ...
	I1211 23:53:26.306857  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/config.json: {Name:mk469b90b54323209236f5351ccad5d417857cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:26.321839  273363 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1211 23:53:26.321947  273363 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1211 23:53:26.321978  273363 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory, skipping pull
	I1211 23:53:26.321994  273363 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in cache, skipping pull
	I1211 23:53:26.322003  273363 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 as a tarball
	I1211 23:53:26.322018  273363 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from local cache
	I1211 23:53:43.522907  273363 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from cached tarball
	I1211 23:53:43.522944  273363 cache.go:194] Successfully downloaded all kic artifacts
	I1211 23:53:43.522976  273363 start.go:360] acquireMachinesLock for addons-680529: {Name:mka66168fe56cbfe9ea230a9ab15a4bcc0bf82b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:53:43.523101  273363 start.go:364] duration metric: took 107.633µs to acquireMachinesLock for "addons-680529"
	I1211 23:53:43.523128  273363 start.go:93] Provisioning new machine with config: &{Name:addons-680529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:53:43.523197  273363 start.go:125] createHost starting for "" (driver="docker")
	I1211 23:53:43.524914  273363 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1211 23:53:43.525167  273363 start.go:159] libmachine.API.Create for "addons-680529" (driver="docker")
	I1211 23:53:43.525209  273363 client.go:168] LocalClient.Create starting
	I1211 23:53:43.525332  273363 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem
	I1211 23:53:43.804383  273363 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/cert.pem
	I1211 23:53:43.948495  273363 cli_runner.go:164] Run: docker network inspect addons-680529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1211 23:53:43.963207  273363 cli_runner.go:211] docker network inspect addons-680529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1211 23:53:43.963316  273363 network_create.go:284] running [docker network inspect addons-680529] to gather additional debugging logs...
	I1211 23:53:43.963338  273363 cli_runner.go:164] Run: docker network inspect addons-680529
	W1211 23:53:43.976941  273363 cli_runner.go:211] docker network inspect addons-680529 returned with exit code 1
	I1211 23:53:43.976972  273363 network_create.go:287] error running [docker network inspect addons-680529]: docker network inspect addons-680529: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-680529 not found
	I1211 23:53:43.976985  273363 network_create.go:289] output of [docker network inspect addons-680529]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-680529 not found
	
	** /stderr **
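	The "network not found" error above is the expected first-run path: minikube probes for an existing network before creating one. The same check by hand, as a sketch (empty output means the network does not exist yet):
	
	# List any network named addons-680529 with its driver
	docker network ls --filter name=addons-680529 --format '{{.Name}} {{.Driver}}'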
	I1211 23:53:43.977072  273363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 23:53:43.993063  273363 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198b000}
	I1211 23:53:43.993106  273363 network_create.go:124] attempt to create docker network addons-680529 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1211 23:53:43.993159  273363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-680529 addons-680529
	I1211 23:53:44.066002  273363 network_create.go:108] docker network addons-680529 192.168.49.0/24 created
	I1211 23:53:44.066040  273363 kic.go:121] calculated static IP "192.168.49.2" for the "addons-680529" container
	I1211 23:53:44.066177  273363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1211 23:53:44.082160  273363 cli_runner.go:164] Run: docker volume create addons-680529 --label name.minikube.sigs.k8s.io=addons-680529 --label created_by.minikube.sigs.k8s.io=true
	I1211 23:53:44.098843  273363 oci.go:103] Successfully created a docker volume addons-680529
	I1211 23:53:44.098942  273363 cli_runner.go:164] Run: docker run --rm --name addons-680529-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-680529 --entrypoint /usr/bin/test -v addons-680529:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib
	I1211 23:53:46.176639  273363 cli_runner.go:217] Completed: docker run --rm --name addons-680529-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-680529 --entrypoint /usr/bin/test -v addons-680529:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib: (2.077639218s)
	I1211 23:53:46.176672  273363 oci.go:107] Successfully prepared a docker volume addons-680529
	I1211 23:53:46.176710  273363 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:53:46.176735  273363 kic.go:194] Starting extracting preloaded images to volume ...
	I1211 23:53:46.176810  273363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-680529:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir
	I1211 23:53:50.231601  273363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-680529:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir: (4.054749684s)
	I1211 23:53:50.231633  273363 kic.go:203] duration metric: took 4.054894215s to extract preloaded images to volume ...
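	The extracted preload lives in the named volume that is later mounted at /var in the node container (see the Mounts section of the inspect output above). A sketch for confirming the volume and locating its data on the host:
	
	# Host path backing the node's /var
	docker volume inspect addons-680529 --format '{{.Mountpoint}}'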
	W1211 23:53:50.231787  273363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1211 23:53:50.231896  273363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1211 23:53:50.293543  273363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-680529 --name addons-680529 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-680529 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-680529 --network addons-680529 --ip 192.168.49.2 --volume addons-680529:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2
	I1211 23:53:50.638889  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Running}}
	I1211 23:53:50.661869  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:53:50.687003  273363 cli_runner.go:164] Run: docker exec addons-680529 stat /var/lib/dpkg/alternatives/iptables
	I1211 23:53:50.738132  273363 oci.go:144] the created container "addons-680529" has a running status.
	I1211 23:53:50.738243  273363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa...
	I1211 23:53:51.681528  273363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1211 23:53:51.707286  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:53:51.730069  273363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1211 23:53:51.730097  273363 kic_runner.go:114] Args: [docker exec --privileged addons-680529 chown docker:docker /home/docker/.ssh/authorized_keys]
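	With the public key installed for the docker user, the node is reachable over the forwarded SSH port. A manual-access sketch using values recorded elsewhere in this log (key path from the kic step above, user docker, host port 33085 from the port map):
	
	ssh -i /home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa \
	    -p 33085 docker@127.0.0.1 hostname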
	I1211 23:53:51.777999  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:53:51.798969  273363 machine.go:93] provisionDockerMachine start ...
	I1211 23:53:51.799060  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:51.820747  273363 main.go:141] libmachine: Using SSH client type: native
	I1211 23:53:51.821006  273363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1211 23:53:51.821015  273363 main.go:141] libmachine: About to run SSH command:
	hostname
	I1211 23:53:51.953788  273363 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-680529
	
	I1211 23:53:51.953814  273363 ubuntu.go:169] provisioning hostname "addons-680529"
	I1211 23:53:51.953891  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:51.971604  273363 main.go:141] libmachine: Using SSH client type: native
	I1211 23:53:51.971862  273363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1211 23:53:51.971881  273363 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-680529 && echo "addons-680529" | sudo tee /etc/hostname
	I1211 23:53:52.114005  273363 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-680529
	
	I1211 23:53:52.114206  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:52.132000  273363 main.go:141] libmachine: Using SSH client type: native
	I1211 23:53:52.132246  273363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1211 23:53:52.132283  273363 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-680529' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-680529/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-680529' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:53:52.266078  273363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
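
The shell snippet above is an idempotent /etc/hosts patch: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 entry if there is one, and append a new entry otherwise. A rough Go equivalent (a sketch under those assumptions, not minikube's code):

package main

import (
	"os"
	"regexp"
)

// patchHosts makes sure /etc/hosts maps 127.0.1.1 to the node hostname,
// following the same three-way logic as the provisioning script.
func patchHosts(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // hostname already present; nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := patchHosts("/etc/hosts", "addons-680529"); err != nil {
		panic(err)
	}
}
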
	I1211 23:53:52.266114  273363 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20083-267093/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-267093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-267093/.minikube}
	I1211 23:53:52.266160  273363 ubuntu.go:177] setting up certificates
	I1211 23:53:52.266173  273363 provision.go:84] configureAuth start
	I1211 23:53:52.266236  273363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-680529
	I1211 23:53:52.282671  273363 provision.go:143] copyHostCerts
	I1211 23:53:52.282749  273363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-267093/.minikube/key.pem (1679 bytes)
	I1211 23:53:52.282866  273363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-267093/.minikube/ca.pem (1082 bytes)
	I1211 23:53:52.282929  273363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-267093/.minikube/cert.pem (1123 bytes)
	I1211 23:53:52.282978  273363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-267093/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca-key.pem org=jenkins.addons-680529 san=[127.0.0.1 192.168.49.2 addons-680529 localhost minikube]
	I1211 23:53:52.513838  273363 provision.go:177] copyRemoteCerts
	I1211 23:53:52.513921  273363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:53:52.513966  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:52.533714  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:52.627497  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 23:53:52.653369  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:53:52.677218  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 23:53:52.701776  273363 provision.go:87] duration metric: took 435.586406ms to configureAuth
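
The server certificate generated above carries a SAN list covering every identity the endpoint answers to: 127.0.0.1, 192.168.49.2, addons-680529, localhost, minikube. A compact Go sketch of minting a cert with those SANs (self-signed here for brevity; minikube actually signs with its own CA, and the names below are copied from the log line):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-680529"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the log: IPs plus DNS names for the same endpoint.
		DNSNames:    []string{"addons-680529", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
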
	I1211 23:53:52.701853  273363 ubuntu.go:193] setting minikube options for container-runtime
	I1211 23:53:52.702065  273363 config.go:182] Loaded profile config "addons-680529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:53:52.702211  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:52.719218  273363 main.go:141] libmachine: Using SSH client type: native
	I1211 23:53:52.719465  273363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1211 23:53:52.719487  273363 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:53:52.951461  273363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:53:52.951525  273363 machine.go:96] duration metric: took 1.152535409s to provisionDockerMachine
	I1211 23:53:52.951553  273363 client.go:171] duration metric: took 9.426336636s to LocalClient.Create
	I1211 23:53:52.951588  273363 start.go:167] duration metric: took 9.426422083s to libmachine.API.Create "addons-680529"
	I1211 23:53:52.951616  273363 start.go:293] postStartSetup for "addons-680529" (driver="docker")
	I1211 23:53:52.951657  273363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:53:52.951787  273363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:53:52.951861  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:52.968961  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:53.063657  273363 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:53:53.067080  273363 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 23:53:53.067163  273363 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1211 23:53:53.067199  273363 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1211 23:53:53.067213  273363 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1211 23:53:53.067226  273363 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-267093/.minikube/addons for local assets ...
	I1211 23:53:53.067294  273363 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-267093/.minikube/files for local assets ...
	I1211 23:53:53.067320  273363 start.go:296] duration metric: took 115.670105ms for postStartSetup
	I1211 23:53:53.067633  273363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-680529
	I1211 23:53:53.086385  273363 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/config.json ...
	I1211 23:53:53.086779  273363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 23:53:53.086835  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:53.103278  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:53.195077  273363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 23:53:53.199584  273363 start.go:128] duration metric: took 9.676367262s to createHost
	I1211 23:53:53.199611  273363 start.go:83] releasing machines lock for "addons-680529", held for 9.676499617s
	I1211 23:53:53.199680  273363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-680529
	I1211 23:53:53.216859  273363 ssh_runner.go:195] Run: cat /version.json
	I1211 23:53:53.216930  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:53.217218  273363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:53:53.217288  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:53:53.236442  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:53.239362  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:53:53.461052  273363 ssh_runner.go:195] Run: systemctl --version
	I1211 23:53:53.465350  273363 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:53:53.605737  273363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1211 23:53:53.610034  273363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:53:53.631015  273363 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1211 23:53:53.631097  273363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:53:53.662332  273363 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
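
Note that the bridge and podman CNI configs are disabled by renaming, not deleting, so a later start can restore them. A Go sketch of that rename pass (a hypothetical helper mirroring the find/-exec mv pipeline above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames every matching config under dir with a
// .mk_disabled suffix and reports which files it touched.
func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
	var disabled []string
	for _, pat := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled by an earlier run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	got, err := disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", got)
}
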
	I1211 23:53:53.662400  273363 start.go:495] detecting cgroup driver to use...
	I1211 23:53:53.662450  273363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 23:53:53.662525  273363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:53:53.679214  273363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:53:53.690683  273363 docker.go:217] disabling cri-docker service (if available) ...
	I1211 23:53:53.690752  273363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:53:53.705298  273363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:53:53.720122  273363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:53:53.811749  273363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:53:53.903591  273363 docker.go:233] disabling docker service ...
	I1211 23:53:53.903661  273363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:53:53.923657  273363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:53:53.935510  273363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:53:54.028919  273363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:53:54.124972  273363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:53:54.135898  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:53:54.151607  273363 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1211 23:53:54.151720  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.161361  273363 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:53:54.161451  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.171643  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.181485  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.191090  273363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:53:54.199960  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.209951  273363 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.225659  273363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:53:54.235010  273363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:53:54.243799  273363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:53:54.252529  273363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:53:54.329703  273363 ssh_runner.go:195] Run: sudo systemctl restart crio
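
The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place before this restart: the pause image is pinned to registry.k8s.io/pause:3.10 and the cgroup manager is switched to cgroupfs to match the host. A Go sketch of the two central rewrites (a hypothetical equivalent; the real sequence also adjusts conmon_cgroup and default_sysctls as shown above):

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf applies the same line-level substitutions the sed
// commands perform: replace whole lines that set the two keys.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}
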
	I1211 23:53:54.433564  273363 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:53:54.433725  273363 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:53:54.438072  273363 start.go:563] Will wait 60s for crictl version
	I1211 23:53:54.438216  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:53:54.441667  273363 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:53:54.478611  273363 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1211 23:53:54.478726  273363 ssh_runner.go:195] Run: crio --version
	I1211 23:53:54.515949  273363 ssh_runner.go:195] Run: crio --version
	I1211 23:53:54.557876  273363 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1211 23:53:54.560375  273363 cli_runner.go:164] Run: docker network inspect addons-680529 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 23:53:54.580924  273363 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 23:53:54.584432  273363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:53:54.595231  273363 kubeadm.go:883] updating cluster {Name:addons-680529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:53:54.595357  273363 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:53:54.595418  273363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:53:54.677508  273363 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:53:54.677529  273363 crio.go:433] Images already preloaded, skipping extraction
	I1211 23:53:54.677585  273363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:53:54.712881  273363 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:53:54.712906  273363 cache_images.go:84] Images are preloaded, skipping loading
	I1211 23:53:54.712915  273363 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1211 23:53:54.713011  273363 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-680529 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:53:54.713097  273363 ssh_runner.go:195] Run: crio config
	I1211 23:53:54.769241  273363 cni.go:84] Creating CNI manager for ""
	I1211 23:53:54.769265  273363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:53:54.769275  273363 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 23:53:54.769327  273363 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-680529 NodeName:addons-680529 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:53:54.769484  273363 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-680529"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 23:53:54.769585  273363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1211 23:53:54.778379  273363 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 23:53:54.778503  273363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:53:54.787651  273363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1211 23:53:54.806004  273363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:53:54.824088  273363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1211 23:53:54.842000  273363 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 23:53:54.845520  273363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:53:54.856161  273363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:53:54.934881  273363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:53:54.948546  273363 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529 for IP: 192.168.49.2
	I1211 23:53:54.948614  273363 certs.go:194] generating shared ca certs ...
	I1211 23:53:54.948644  273363 certs.go:226] acquiring lock for ca certs: {Name:mk75a7b7ee8a94f6f2a55504cc54c197a74cc120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:54.948814  273363 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-267093/.minikube/ca.key
	I1211 23:53:55.384596  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt ...
	I1211 23:53:55.384625  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt: {Name:mk3e52d092dcef5787bc435861f1608c2f947114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:55.384845  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/ca.key ...
	I1211 23:53:55.384861  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/ca.key: {Name:mk9187b90a55d2e1f4f24ea98738619dd0fa0832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:55.384950  273363 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.key
	I1211 23:53:55.792580  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.crt ...
	I1211 23:53:55.792613  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.crt: {Name:mkf2a57bfa0628fdc29088ad9a2c197184da2ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:55.792796  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.key ...
	I1211 23:53:55.792809  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.key: {Name:mk0abeb53d31b4abfeac54a5df449d9e6224a2de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:55.792898  273363 certs.go:256] generating profile certs ...
	I1211 23:53:55.792968  273363 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.key
	I1211 23:53:55.792995  273363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt with IP's: []
	I1211 23:53:56.054981  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt ...
	I1211 23:53:56.055017  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: {Name:mk35f1b1673ed8179bd483f9acbb1465f024781b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:56.055201  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.key ...
	I1211 23:53:56.055215  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.key: {Name:mk73fb31a0e6881eae06c356d452610512a09ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:56.055299  273363 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key.1f96b591
	I1211 23:53:56.055318  273363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt.1f96b591 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1211 23:53:56.406908  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt.1f96b591 ...
	I1211 23:53:56.406942  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt.1f96b591: {Name:mk5ef5b50bf5b0af6aa2229ad8cc8b616cd41b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:56.407123  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key.1f96b591 ...
	I1211 23:53:56.407137  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key.1f96b591: {Name:mkd8712d30c56a797038b17a448278615ff35eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:56.407221  273363 certs.go:381] copying /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt.1f96b591 -> /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt
	I1211 23:53:56.407308  273363 certs.go:385] copying /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key.1f96b591 -> /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key
	I1211 23:53:56.407386  273363 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.key
	I1211 23:53:56.407414  273363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.crt with IP's: []
	I1211 23:53:57.352976  273363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.crt ...
	I1211 23:53:57.353015  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.crt: {Name:mk2a4928ff8b166e35c1fb625d8ba5ea1ee5a2cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:57.353213  273363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.key ...
	I1211 23:53:57.353232  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.key: {Name:mk0e895a5a8dd2ede233ffbf83a9fca190c82f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:53:57.353429  273363 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 23:53:57.353474  273363 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/ca.pem (1082 bytes)
	I1211 23:53:57.353504  273363 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:53:57.353533  273363 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-267093/.minikube/certs/key.pem (1679 bytes)
	I1211 23:53:57.354204  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:53:57.385320  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 23:53:57.410544  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:53:57.434244  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 23:53:57.458477  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:53:57.482740  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1211 23:53:57.506097  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:53:57.529883  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 23:53:57.553059  273363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:53:57.576861  273363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:53:57.594879  273363 ssh_runner.go:195] Run: openssl version
	I1211 23:53:57.600352  273363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 23:53:57.609749  273363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:53:57.613366  273363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:53 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:53:57.613432  273363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:53:57.620526  273363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
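
The b5213941.0 link above follows OpenSSL's hashed-directory convention: "openssl x509 -hash -noout" prints the subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs is what OpenSSL-based clients scan for when verifying against the minikube CA. A Go sketch of that step (a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of certPath and creates the
// <hash>.0 symlink in certsDir if it is not already there.
func linkCA(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in this run
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // symlink already in place
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("linked:", link)
}
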
	I1211 23:53:57.629800  273363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:53:57.633145  273363 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:53:57.633216  273363 kubeadm.go:392] StartCluster: {Name:addons-680529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-680529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:53:57.633303  273363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:53:57.633360  273363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:53:57.670503  273363 cri.go:89] found id: ""
	I1211 23:53:57.670593  273363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:53:57.679270  273363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:53:57.687993  273363 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1211 23:53:57.688081  273363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:53:57.696794  273363 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:53:57.696815  273363 kubeadm.go:157] found existing configuration files:
	
	I1211 23:53:57.696865  273363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:53:57.705251  273363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:53:57.705316  273363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:53:57.713714  273363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:53:57.722218  273363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:53:57.722325  273363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:53:57.730419  273363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:53:57.739195  273363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:53:57.739290  273363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:53:57.747508  273363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:53:57.756548  273363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:53:57.756629  273363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:53:57.765360  273363 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 23:53:57.808245  273363 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1211 23:53:57.808619  273363 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 23:53:57.827671  273363 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1211 23:53:57.827750  273363 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1211 23:53:57.827792  273363 kubeadm.go:310] OS: Linux
	I1211 23:53:57.827842  273363 kubeadm.go:310] CGROUPS_CPU: enabled
	I1211 23:53:57.827899  273363 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1211 23:53:57.827950  273363 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1211 23:53:57.828001  273363 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1211 23:53:57.828053  273363 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1211 23:53:57.828105  273363 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1211 23:53:57.828154  273363 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1211 23:53:57.828205  273363 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1211 23:53:57.828254  273363 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1211 23:53:57.893107  273363 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:53:57.893228  273363 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:53:57.893328  273363 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:53:57.900764  273363 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:53:57.905172  273363 out.go:235]   - Generating certificates and keys ...
	I1211 23:53:57.905274  273363 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 23:53:57.905339  273363 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 23:53:58.203706  273363 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:53:58.960817  273363 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:53:59.367623  273363 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:53:59.616824  273363 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1211 23:54:00.291289  273363 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1211 23:54:00.306669  273363 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-680529 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 23:54:01.098211  273363 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1211 23:54:01.098741  273363 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-680529 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 23:54:01.492139  273363 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:54:02.121408  273363 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:54:02.469189  273363 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1211 23:54:02.469407  273363 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:54:03.606420  273363 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:54:04.316355  273363 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:54:04.781315  273363 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:54:05.174239  273363 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:54:05.680509  273363 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:54:05.681202  273363 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:54:05.684132  273363 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:54:05.688022  273363 out.go:235]   - Booting up control plane ...
	I1211 23:54:05.688135  273363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:54:05.688222  273363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:54:05.688294  273363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:54:05.697930  273363 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:54:05.704196  273363 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:54:05.704410  273363 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 23:54:05.796597  273363 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:54:05.796725  273363 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:54:07.298003  273363 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.50151497s
	I1211 23:54:07.298096  273363 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1211 23:54:13.799463  273363 kubeadm.go:310] [api-check] The API server is healthy after 6.501399768s
	I1211 23:54:13.820040  273363 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:54:13.835028  273363 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:54:13.862093  273363 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:54:13.862331  273363 kubeadm.go:310] [mark-control-plane] Marking the node addons-680529 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:54:13.873106  273363 kubeadm.go:310] [bootstrap-token] Using token: 8wpob4.wstfq9fo1o28lkg0
	I1211 23:54:13.875635  273363 out.go:235]   - Configuring RBAC rules ...
	I1211 23:54:13.875768  273363 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:54:13.881078  273363 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:54:13.889362  273363 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:54:13.893325  273363 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:54:13.902259  273363 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:54:13.906515  273363 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:54:14.207761  273363 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:54:14.637430  273363 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 23:54:15.207319  273363 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 23:54:15.208354  273363 kubeadm.go:310] 
	I1211 23:54:15.208427  273363 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 23:54:15.208433  273363 kubeadm.go:310] 
	I1211 23:54:15.208510  273363 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 23:54:15.208515  273363 kubeadm.go:310] 
	I1211 23:54:15.208540  273363 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 23:54:15.208606  273363 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:54:15.208657  273363 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:54:15.208662  273363 kubeadm.go:310] 
	I1211 23:54:15.208721  273363 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 23:54:15.208727  273363 kubeadm.go:310] 
	I1211 23:54:15.208774  273363 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:54:15.208779  273363 kubeadm.go:310] 
	I1211 23:54:15.208831  273363 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 23:54:15.208905  273363 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:54:15.208975  273363 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:54:15.208980  273363 kubeadm.go:310] 
	I1211 23:54:15.209064  273363 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:54:15.209140  273363 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 23:54:15.209145  273363 kubeadm.go:310] 
	I1211 23:54:15.209228  273363 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8wpob4.wstfq9fo1o28lkg0 \
	I1211 23:54:15.209331  273363 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:907f7936d896e8031b15287260859487f794bdb8c0f9e6400d13c7899dae4a1b \
	I1211 23:54:15.209697  273363 kubeadm.go:310] 	--control-plane 
	I1211 23:54:15.209721  273363 kubeadm.go:310] 
	I1211 23:54:15.209807  273363 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:54:15.209813  273363 kubeadm.go:310] 
	I1211 23:54:15.209894  273363 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8wpob4.wstfq9fo1o28lkg0 \
	I1211 23:54:15.209996  273363 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:907f7936d896e8031b15287260859487f794bdb8c0f9e6400d13c7899dae4a1b 
	I1211 23:54:15.212456  273363 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1072-aws\n", err: exit status 1
	I1211 23:54:15.212586  273363 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:54:15.212607  273363 cni.go:84] Creating CNI manager for ""
	I1211 23:54:15.212616  273363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:54:15.215530  273363 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1211 23:54:15.218324  273363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1211 23:54:15.222074  273363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1211 23:54:15.222097  273363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1211 23:54:15.241894  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1211 23:54:15.545204  273363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:54:15.545347  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:15.545425  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-680529 minikube.k8s.io/updated_at=2024_12_11T23_54_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=addons-680529 minikube.k8s.io/primary=true
	I1211 23:54:15.727321  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:15.727408  273363 ops.go:34] apiserver oom_adj: -16
	I1211 23:54:16.227619  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:16.727459  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:17.227657  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:17.728243  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:18.227983  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:18.727405  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:19.227395  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:19.728209  273363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:54:19.858074  273363 kubeadm.go:1113] duration metric: took 4.31277244s to wait for elevateKubeSystemPrivileges
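
The half-second cadence of the "get sa default" calls above is a poll: kubeadm has finished, but the cluster is only usable once the default ServiceAccount exists in the default namespace. A Go sketch of such a wait loop (hypothetical; minikube's own retry logic differs in detail):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` on a fixed interval
// until the command succeeds or the deadline passes.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "sa", "default").Run()
		if err == nil {
			return nil // service account exists; bootstrap is complete
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		panic(err)
	}
}
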
	I1211 23:54:19.858108  273363 kubeadm.go:394] duration metric: took 22.224914095s to StartCluster
	I1211 23:54:19.858126  273363 settings.go:142] acquiring lock: {Name:mk814eae3eecf1bc157101f19f818cc25695a8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:54:19.858269  273363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1211 23:54:19.858719  273363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-267093/kubeconfig: {Name:mk58cf12cb3ced247d8613ba49b2fae0b50590ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:54:19.858926  273363 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:54:19.859083  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:54:19.859347  273363 config.go:182] Loaded profile config "addons-680529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:54:19.859395  273363 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
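Each pair of "Setting addon X=true" / "Checking if addons-680529 exists" lines that follows is one iteration over this toEnable map. A minimal sketch of that dispatch shape (enableAddon is a hypothetical stand-in for the per-addon setup work; this is not minikube's addons package):

package main

import (
	"fmt"
	"sort"
)

// enableAddon is a hypothetical stand-in for the real per-addon work
// (copying manifests to the node, running kubectl apply, verifying pods).
func enableAddon(profile, name string) error {
	fmt.Printf("Setting addon %s=true in %q\n", name, profile)
	return nil
}

func main() {
	// Mirrors the toEnable map logged above: addon name -> desired state.
	toEnable := map[string]bool{
		"ingress": true, "metrics-server": true, "registry": true,
		"volcano": true, "dashboard": false,
	}

	// Sort the keys for deterministic order; Go map iteration is randomized,
	// which is why the "Setting addon" lines above interleave unpredictably.
	names := make([]string, 0, len(toEnable))
	for n := range toEnable {
		names = append(names, n)
	}
	sort.Strings(names)

	for _, n := range names {
		if !toEnable[n] {
			continue // disabled addons are skipped entirely
		}
		if err := enableAddon("addons-680529", n); err != nil {
			fmt.Printf("! Enabling '%s' returned an error: %v\n", n, err)
		}
	}
}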
	I1211 23:54:19.859474  273363 addons.go:69] Setting yakd=true in profile "addons-680529"
	I1211 23:54:19.859488  273363 addons.go:234] Setting addon yakd=true in "addons-680529"
	I1211 23:54:19.859513  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.859741  273363 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-680529"
	I1211 23:54:19.859760  273363 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-680529"
	I1211 23:54:19.859781  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.860307  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.860783  273363 addons.go:69] Setting cloud-spanner=true in profile "addons-680529"
	I1211 23:54:19.860799  273363 addons.go:234] Setting addon cloud-spanner=true in "addons-680529"
	I1211 23:54:19.860822  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.861222  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.862288  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.864908  273363 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-680529"
	I1211 23:54:19.864979  273363 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-680529"
	I1211 23:54:19.865009  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.865461  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.870334  273363 addons.go:69] Setting default-storageclass=true in profile "addons-680529"
	I1211 23:54:19.870378  273363 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-680529"
	I1211 23:54:19.870714  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.871110  273363 out.go:177] * Verifying Kubernetes components...
	I1211 23:54:19.874628  273363 addons.go:69] Setting registry=true in profile "addons-680529"
	I1211 23:54:19.874656  273363 addons.go:234] Setting addon registry=true in "addons-680529"
	I1211 23:54:19.874694  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.875182  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.882981  273363 addons.go:69] Setting storage-provisioner=true in profile "addons-680529"
	I1211 23:54:19.883017  273363 addons.go:234] Setting addon storage-provisioner=true in "addons-680529"
	I1211 23:54:19.883060  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.883553  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.891195  273363 addons.go:69] Setting gcp-auth=true in profile "addons-680529"
	I1211 23:54:19.891239  273363 mustload.go:65] Loading cluster: addons-680529
	I1211 23:54:19.891449  273363 config.go:182] Loaded profile config "addons-680529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:54:19.891722  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.902331  273363 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-680529"
	I1211 23:54:19.902376  273363 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-680529"
	I1211 23:54:19.902622  273363 addons.go:69] Setting ingress=true in profile "addons-680529"
	I1211 23:54:19.902649  273363 addons.go:234] Setting addon ingress=true in "addons-680529"
	I1211 23:54:19.902695  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.902739  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.903070  273363 addons.go:69] Setting ingress-dns=true in profile "addons-680529"
	I1211 23:54:19.903088  273363 addons.go:234] Setting addon ingress-dns=true in "addons-680529"
	I1211 23:54:19.903152  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.918246  273363 addons.go:69] Setting volcano=true in profile "addons-680529"
	I1211 23:54:19.918284  273363 addons.go:234] Setting addon volcano=true in "addons-680529"
	I1211 23:54:19.918332  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.918822  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.923307  273363 addons.go:69] Setting inspektor-gadget=true in profile "addons-680529"
	I1211 23:54:19.928233  273363 addons.go:234] Setting addon inspektor-gadget=true in "addons-680529"
	I1211 23:54:19.928313  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.928076  273363 addons.go:69] Setting metrics-server=true in profile "addons-680529"
	I1211 23:54:19.928483  273363 addons.go:234] Setting addon metrics-server=true in "addons-680529"
	I1211 23:54:19.928505  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.929020  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:19.937426  273363 addons.go:69] Setting volumesnapshots=true in profile "addons-680529"
	I1211 23:54:19.937730  273363 addons.go:234] Setting addon volumesnapshots=true in "addons-680529"
	I1211 23:54:19.937895  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.928096  273363 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-680529"
	I1211 23:54:19.957797  273363 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-680529"
	I1211 23:54:19.957855  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:19.928173  273363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:54:20.005612  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.043275  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.052925  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.065856  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.082385  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.082869  273363 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:54:20.099569  273363 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1211 23:54:20.104206  273363 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:54:20.104236  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:54:20.104314  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.127127  273363 addons.go:234] Setting addon default-storageclass=true in "addons-680529"
	I1211 23:54:20.127192  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:20.129146  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.134985  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:54:20.135009  273363 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:54:20.135086  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.152035  273363 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:54:20.152295  273363 host.go:66] Checking if "addons-680529" exists ...
	W1211 23:54:20.169129  273363 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:54:20.172055  273363 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:54:20.172076  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:54:20.172554  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.173775  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:54:20.174484  273363 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1211 23:54:20.187839  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:54:20.191228  273363 out.go:177]   - Using image docker.io/registry:2.8.3
	I1211 23:54:20.196155  273363 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:54:20.198361  273363 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:54:20.198400  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:54:20.198476  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.208334  273363 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:54:20.208410  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:54:20.208517  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.228804  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:54:20.233197  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:54:20.234507  273363 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-680529"
	I1211 23:54:20.234549  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:20.235517  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:20.249752  273363 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1211 23:54:20.252415  273363 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:54:20.252444  273363 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:54:20.252511  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.303850  273363 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1211 23:54:20.305739  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:54:20.305820  273363 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1211 23:54:20.305852  273363 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1211 23:54:20.306936  273363 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:54:20.306956  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:54:20.307019  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.316965  273363 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1211 23:54:20.317000  273363 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1211 23:54:20.317077  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.339003  273363 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:54:20.339028  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1211 23:54:20.339111  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.346261  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:54:20.347042  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.349164  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:54:20.350908  273363 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:54:20.354350  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:54:20.354491  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.354989  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:54:20.355006  273363 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:54:20.355079  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.389872  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:54:20.394317  273363 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:54:20.396369  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.398242  273363 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:54:20.406311  273363 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1211 23:54:20.406442  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:54:20.406454  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:54:20.406528  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.406806  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.409528  273363 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:54:20.409555  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:54:20.409620  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.421347  273363 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:54:20.421368  273363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:54:20.421432  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.443059  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.447038  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.485389  273363 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:54:20.487983  273363 out.go:177]   - Using image docker.io/busybox:stable
	I1211 23:54:20.495688  273363 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:54:20.495710  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:54:20.495774  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:20.558391  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.559060  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.575533  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.577426  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.587210  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.604184  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.609850  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:20.630188  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
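The interleaved "scp memory --> ..." and "new ssh client" lines above show the transport behind all of these manifests: minikube resolves the host port that Docker mapped to the container's port 22, dials SSH as the docker user, and streams the in-memory YAML to a path under /etc/kubernetes/addons. A minimal sketch of that push with golang.org/x/crypto/ssh (address, key path, and destination are placeholders; the real ssh_runner differs):

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes data to remote dst by piping it into `sudo tee`,
// one way to emulate the "scp memory --> dst" step without a temp file.
func pushBytes(client *ssh.Client, data []byte, dst string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}

func main() {
	key, err := os.ReadFile("/home/user/.minikube/machines/node/id_rsa") // placeholder path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	// 33085 is the host port Docker mapped to the node's 22/tcp in this run.
	client, err := ssh.Dial("tcp", "127.0.0.1:33085", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
	if err := pushBytes(client, manifest, "/etc/kubernetes/addons/demo.yaml"); err != nil {
		panic(err)
	}
}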
	I1211 23:54:20.912189  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:54:20.912215  273363 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:54:20.923688  273363 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:54:20.923715  273363 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:54:20.933128  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:54:20.969541  273363 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:54:20.969566  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:54:20.982102  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:54:21.001243  273363 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:54:21.001270  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:54:21.035929  273363 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:54:21.035951  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1211 23:54:21.090449  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:54:21.096193  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:54:21.132833  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:54:21.136833  273363 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:54:21.136906  273363 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:54:21.140172  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:54:21.172014  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:54:21.172086  273363 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:54:21.185687  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:54:21.190398  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:54:21.190470  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:54:21.194533  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:54:21.198899  273363 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.236594248s)
	I1211 23:54:21.199011  273363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:54:21.203589  273363 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:54:21.203664  273363 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:54:21.209827  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:54:21.237417  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:54:21.347532  273363 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:54:21.347610  273363 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:54:21.419329  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:54:21.419407  273363 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:54:21.447244  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:54:21.447321  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:54:21.457742  273363 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:54:21.457819  273363 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:54:21.576089  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:54:21.607276  273363 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:54:21.607350  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:54:21.638898  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:54:21.638979  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:54:21.663496  273363 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:54:21.663566  273363 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:54:21.809177  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:54:21.850817  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:54:21.850895  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:54:21.854635  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:54:21.854712  273363 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:54:21.968757  273363 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:54:21.968779  273363 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:54:21.976710  273363 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:54:21.976781  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:54:22.048462  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:54:22.064980  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:54:22.065054  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:54:22.136755  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:54:22.136835  273363 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:54:22.195297  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:54:22.195370  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:54:22.351779  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:54:22.351842  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:54:22.522050  273363 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:54:22.522132  273363 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:54:22.697289  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:54:23.085069  273363 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.897188221s)
	I1211 23:54:23.085163  273363 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
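The pipeline that just completed is plain text surgery on the CoreDNS Corefile: it inserts a hosts block mapping host.minikube.internal to the gateway IP ahead of the forward directive, then replaces the ConfigMap. A minimal sketch of the same insertion in Go (string handling only; fetching and replacing the ConfigMap is elided):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a `hosts` block just before the `forward`
// plugin line, mirroring what the sed one-liner above does.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert ahead of the forward line
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}

The fallthrough directive matters: names not matched by the hosts block still fall through to the forward plugin, so ordinary DNS keeps working.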
	I1211 23:54:24.169994  273363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-680529" context rescaled to 1 replicas
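Rescaling coredns to a single replica goes through the Deployment's scale subresource rather than editing the Deployment spec directly. A minimal client-go sketch, assuming a reachable kubeconfig in the default location:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config via the default chain (path is environment-dependent).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	deployments := clientset.AppsV1().Deployments("kube-system")

	// Read the scale subresource, set the desired replica count, write it back.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}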
	I1211 23:54:25.128048  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.194881013s)
	I1211 23:54:26.109492  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.127349013s)
	I1211 23:54:26.109590  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.019120876s)
	I1211 23:54:26.109665  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.976760436s)
	I1211 23:54:26.109890  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.969648397s)
	I1211 23:54:26.109927  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.924180177s)
	I1211 23:54:26.109985  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.013414442s)
	I1211 23:54:26.110004  273363 addons.go:475] Verifying addon registry=true in "addons-680529"
	I1211 23:54:26.112957  273363 out.go:177] * Verifying registry addon...
	I1211 23:54:26.116446  273363 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:54:26.124038  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.929431734s)
	I1211 23:54:26.124221  273363 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.925185617s)
	I1211 23:54:26.125071  273363 node_ready.go:35] waiting up to 6m0s for node "addons-680529" to be "Ready" ...
	I1211 23:54:26.162992  273363 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:54:26.163022  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1211 23:54:26.191062  273363 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
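The default-storageclass error above is an optimistic-concurrency conflict: the storage-provisioner-rancher addon created or updated the local-path StorageClass between this client's read and its write, so the write carried a stale resourceVersion and the apiserver rejected it. The usual remedy is a read-modify-write loop; a minimal sketch with client-go's conflict-retry helper:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// re-reading the object and retrying whenever the apiserver reports a
// write conflict (HTTP 409).
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err // a Conflict error here makes RetryOnConflict run the closure again
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := markNonDefault(context.Background(), cs, "local-path"); err != nil {
		panic(err)
	}
}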
	I1211 23:54:26.391263  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.181344081s)
	I1211 23:54:26.660156  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:27.123710  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.547535075s)
	I1211 23:54:27.123768  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.314507093s)
	I1211 23:54:27.123789  273363 addons.go:475] Verifying addon metrics-server=true in "addons-680529"
	I1211 23:54:27.123857  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.886372813s)
	I1211 23:54:27.123881  273363 addons.go:475] Verifying addon ingress=true in "addons-680529"
	I1211 23:54:27.126739  273363 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-680529 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:54:27.126934  273363 out.go:177] * Verifying ingress addon...
	I1211 23:54:27.130674  273363 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:54:27.159300  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:27.160252  273363 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:54:27.160302  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
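Every "waiting for pod ..., current state: Pending" line that follows is one tick of the same loop: list pods by label selector, check the phase of each, sleep, and try again until everything is Running or the timeout expires. A minimal client-go sketch of that poll, using the ingress-nginx selector from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allRunning reports whether every pod matching the selector is Running.
func allRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet; keep polling
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return allRunning(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all ingress-nginx pods Running")
}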
	I1211 23:54:27.296838  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.248278878s)
	W1211 23:54:27.296925  273363 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:54:27.296967  273363 retry.go:31] will retry after 165.916789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
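The failure being retried here is an ordering problem rather than a broken manifest: the csi-hostpath-snapclass object is applied in the same batch as the CRD that defines its kind, and the VolumeSnapshotClass type is not yet established when the object arrives, hence "ensure CRDs are installed first". The remedy visible in the log is to retry after a short delay (about 166ms, with the re-run at 23:54:27.463 below additionally passing --force). A minimal stdlib sketch of that backoff shape (applyManifests is a hypothetical stand-in for the kubectl step):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNotEstablished = errors.New(`no matches for kind "VolumeSnapshotClass"`)

// applyManifests is a hypothetical stand-in for shelling out to
// `kubectl apply -f ...`; it fails twice here to simulate the CRD race.
var attempts int

func applyManifests() error {
	attempts++
	if attempts < 3 {
		return errNotEstablished
	}
	return nil
}

func main() {
	backoff := 100 * time.Millisecond
	for try := 1; ; try++ {
		err := applyManifests()
		if err == nil {
			fmt.Printf("apply succeeded on attempt %d\n", try)
			return
		}
		if try == 5 {
			panic(fmt.Sprintf("giving up after %d attempts: %v", try, err))
		}
		// Jittered exponential backoff, in the spirit of the ~166ms delay logged above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff *= 2
	}
}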
	I1211 23:54:27.463414  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:54:27.620767  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.923372321s)
	I1211 23:54:27.620851  273363 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-680529"
	I1211 23:54:27.623241  273363 out.go:177] * Verifying csi-hostpath-driver addon...
	I1211 23:54:27.625994  273363 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:54:27.639982  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:27.641488  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:27.642967  273363 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:54:27.643029  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:28.120366  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:28.130214  273363 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:54:28.130737  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:28.130713  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:28.134768  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:28.620879  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:28.630618  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:28.634236  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:29.119893  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:29.129853  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:29.134659  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:29.620198  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:29.629853  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:29.634584  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:29.779161  273363 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:54:29.779244  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:29.796490  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:29.907471  273363 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:54:29.940138  273363 addons.go:234] Setting addon gcp-auth=true in "addons-680529"
	I1211 23:54:29.940187  273363 host.go:66] Checking if "addons-680529" exists ...
	I1211 23:54:29.940658  273363 cli_runner.go:164] Run: docker container inspect addons-680529 --format={{.State.Status}}
	I1211 23:54:29.960885  273363 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:54:29.960941  273363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-680529
	I1211 23:54:29.983958  273363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/addons-680529/id_rsa Username:docker}
	I1211 23:54:30.126582  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:30.139769  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:30.144397  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:30.144745  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:30.297057  273363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.833545548s)
	I1211 23:54:30.300211  273363 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:54:30.302810  273363 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:54:30.305274  273363 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:54:30.305307  273363 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:54:30.323881  273363 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:54:30.323909  273363 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:54:30.343795  273363 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:54:30.343857  273363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:54:30.362858  273363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:54:30.621377  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:30.633146  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:30.636257  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:30.884218  273363 addons.go:475] Verifying addon gcp-auth=true in "addons-680529"
	I1211 23:54:30.888490  273363 out.go:177] * Verifying gcp-auth addon...
	I1211 23:54:30.891918  273363 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:54:30.896094  273363 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:54:30.896117  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:31.120559  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:31.131431  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:31.134698  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:31.395158  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:31.619953  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:31.629922  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:31.634451  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:31.895950  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:32.120088  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:32.131178  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:32.134177  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:32.395608  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:32.622022  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:32.628179  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:32.629859  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:32.634250  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:32.895683  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:33.120201  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:33.132144  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:33.134657  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:33.395223  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:33.622327  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:33.629746  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:33.634825  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:33.895406  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:34.119591  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:34.130235  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:34.134783  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:34.395355  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:34.620458  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:34.629309  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:34.629786  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:34.634646  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:34.895224  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:35.120660  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:35.131151  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:35.134708  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:35.395016  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:35.620240  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:35.629871  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:35.634274  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:35.895349  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:36.119980  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:36.131200  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:36.134540  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:36.395013  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:36.620228  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:36.629934  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:36.630491  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:36.634931  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:36.895268  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:37.120240  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:37.131559  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:37.134485  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:37.395872  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:37.620010  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:37.630131  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:37.635029  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:37.895248  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:38.119372  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:38.129343  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:38.140385  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:38.395567  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:38.620763  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:38.630409  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:38.634040  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:38.895271  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:39.119844  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:39.128896  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:39.131666  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:39.134162  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:39.395146  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:39.620412  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:39.630071  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:39.634901  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:39.895327  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:40.120599  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:40.130586  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:40.135612  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:40.395283  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:40.620520  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:40.630695  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:40.634268  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:40.895600  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:41.120286  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:41.130513  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:41.135000  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:41.395376  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:41.619775  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:41.630091  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:41.630508  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:41.634783  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:41.895405  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:42.120167  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:42.131995  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:42.135862  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:42.395583  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:42.620367  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:42.630534  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:42.634559  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:42.894911  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:43.119916  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:43.132790  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:43.134674  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:43.395046  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:43.619538  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:43.630259  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:43.634367  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:43.895722  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:44.119746  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:44.128625  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:44.130450  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:44.134906  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:44.395344  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:44.620371  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:44.630726  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:44.634186  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:44.895973  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:45.120942  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:45.143672  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:45.144132  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:45.395279  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:45.620438  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:45.629807  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:45.634575  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:45.895989  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:46.120517  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:46.128821  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:46.131700  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:46.134332  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:46.395878  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:46.620353  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:46.629825  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:46.634660  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:46.894849  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:47.119521  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:47.131303  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:47.134563  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:47.395702  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:47.619551  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:47.629880  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:47.634614  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:47.895315  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:48.120091  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:48.129638  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:48.134296  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:48.136076  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:48.395444  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:48.620380  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:48.631204  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:48.634400  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:48.899608  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:49.119894  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:49.129991  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:49.134872  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:49.394980  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:49.620716  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:49.629579  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:49.635153  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:49.895562  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:50.119884  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:50.130753  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:50.131359  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:50.134528  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:50.395846  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:50.621379  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:50.629374  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:50.634318  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:50.895532  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:51.120423  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:51.129594  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:51.135296  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:51.395717  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:51.619730  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:51.630394  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:51.634107  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:51.895582  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:52.119900  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:52.130581  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:52.134302  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:52.395634  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:52.619878  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:52.628740  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:52.630331  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:52.634382  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:52.895516  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:53.119629  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:53.136027  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:53.136572  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:53.395551  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:53.620416  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:53.630418  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:53.636086  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:53.895158  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:54.120307  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:54.129598  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:54.134661  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:54.395174  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:54.620315  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:54.630459  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:54.634294  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:54.895655  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:55.120431  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:55.128842  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:55.129883  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:55.134780  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:55.395218  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:55.619270  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:55.630124  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:55.634905  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:55.895814  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:56.119869  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:56.130867  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:56.133986  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:56.395730  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:56.620096  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:56.631020  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:56.634376  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:56.895475  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:57.119960  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:57.130077  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:57.131753  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:57.134603  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:57.396002  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:57.620071  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:57.631816  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:57.634672  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:57.895175  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:58.120449  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:58.136294  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:58.138221  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:58.395610  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:58.620108  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:58.629277  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:58.635091  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:58.895594  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:59.119685  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:59.130987  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:54:59.131169  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:59.133995  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:59.395539  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:54:59.620597  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:54:59.630369  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:54:59.634908  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:54:59.896212  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:00.120910  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:00.162727  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:00.164141  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:00.395606  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:00.621059  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:00.631181  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:00.636794  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:00.895207  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:01.120054  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:01.131149  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:55:01.132655  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:01.136958  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:01.395414  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:01.621119  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:01.630870  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:01.634711  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:01.895372  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:02.120939  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:02.130976  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:02.134728  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:02.395443  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:02.620203  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:02.629678  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:02.634527  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:02.896086  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:03.119866  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:03.130202  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:03.135406  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:03.395947  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:03.620025  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:03.628392  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:55:03.629802  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:03.635017  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:03.895656  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:04.120009  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:04.131263  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:04.134483  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:04.395851  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:04.619674  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:04.630268  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:04.634809  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:04.895438  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:05.119749  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:05.134404  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:05.135700  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:05.395361  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:05.619579  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:05.628966  273363 node_ready.go:53] node "addons-680529" has status "Ready":"False"
	I1211 23:55:05.630309  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:05.633874  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:05.895128  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:06.120083  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:06.131639  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:06.134571  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:06.395021  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:06.619962  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:06.630332  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:06.634256  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:06.895761  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:07.139764  273363 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:55:07.139788  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:07.143877  273363 node_ready.go:49] node "addons-680529" has status "Ready":"True"
	I1211 23:55:07.143901  273363 node_ready.go:38] duration metric: took 41.018799567s for node "addons-680529" to be "Ready" ...
	I1211 23:55:07.143912  273363 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
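	(The transition above, where node_ready.go flips from "Ready":"False" to "Ready":"True" after ~41s and pod_ready.go then fans out to the system-critical pods, is a plain condition-polling loop against the Kubernetes API. Below is a minimal client-go sketch of that pattern; it is an illustration only, not minikube's actual node_ready.go code, and the kubeconfig path, poll interval, and timeout are assumptions for the example.)

// nodeready.go - minimal sketch of a "wait for node Ready" poll loop.
// Illustration only; NOT minikube's node_ready.go. The kubeconfig path,
// 2s poll interval, and 6m timeout are assumptions for this example.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const node = "addons-680529"
	start := time.Now()
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		n, err := client.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			// Mirrors the "duration metric: took ..." log line above.
			fmt.Printf("node %q Ready after %s\n", node, time.Since(start))
			return
		}
		log.Printf("node %q has status \"Ready\":\"False\"", node)
		select {
		case <-ctx.Done():
			log.Fatalf("timed out waiting for node %q", node)
		case <-time.After(2 * time.Second): // poll interval (assumed)
		}
	}
}

// nodeIsReady reports whether the node's NodeReady condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}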
	I1211 23:55:07.163487  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:07.165850  273363 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:55:07.165912  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:07.170615  273363 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ltfkm" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:07.484946  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:07.620593  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:07.631056  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:07.634943  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:07.897339  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:08.122493  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:08.223988  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:08.225144  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:08.396538  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:08.620788  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:08.631216  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:08.634666  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:08.895824  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:09.120424  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:09.132452  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:09.137089  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:09.178527  273363 pod_ready.go:93] pod "coredns-7c65d6cfc9-ltfkm" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.178601  273363 pod_ready.go:82] duration metric: took 2.0079513s for pod "coredns-7c65d6cfc9-ltfkm" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.178647  273363 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.194866  273363 pod_ready.go:93] pod "etcd-addons-680529" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.194929  273363 pod_ready.go:82] duration metric: took 16.261201ms for pod "etcd-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.194967  273363 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.209691  273363 pod_ready.go:93] pod "kube-apiserver-addons-680529" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.209762  273363 pod_ready.go:82] duration metric: took 14.774718ms for pod "kube-apiserver-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.209790  273363 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.243158  273363 pod_ready.go:93] pod "kube-controller-manager-addons-680529" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.243231  273363 pod_ready.go:82] duration metric: took 33.418905ms for pod "kube-controller-manager-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.243262  273363 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rl6lb" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.264589  273363 pod_ready.go:93] pod "kube-proxy-rl6lb" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.264662  273363 pod_ready.go:82] duration metric: took 21.377089ms for pod "kube-proxy-rl6lb" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.264690  273363 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-680529" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.396632  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:09.575937  273363 pod_ready.go:93] pod "kube-scheduler-addons-680529" in "kube-system" namespace has status "Ready":"True"
	I1211 23:55:09.576018  273363 pod_ready.go:82] duration metric: took 311.291529ms for pod "kube-scheduler-addons-680529" in "kube-system" namespace to be "Ready" ...
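	(Each pod_ready.go line above checks one named pod's Ready condition and records the elapsed time; the repeated metrics-server-84c5f94fbc-c68dp lines that follow are the same check returning false while that pod is still starting. A minimal sketch of the condition check, under the same caveat that this is not minikube's actual implementation:)

// podready.go - sketch of the check behind pod_ready.go's
// "Ready":"True"/"False" lines. Illustration only.
package main

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether the pod has condition Ready=True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}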
	I1211 23:55:09.576045  273363 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace to be "Ready" ...
	I1211 23:55:09.621105  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:09.634365  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:09.640428  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:09.896149  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:10.121096  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:10.131093  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:10.136731  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:10.396091  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:10.620832  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:10.631456  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:10.635428  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:10.896134  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:11.120879  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:11.131912  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:11.136058  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:11.395856  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:11.582046  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:11.620837  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:11.631037  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:11.635342  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:11.896572  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:12.121359  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:12.134937  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:12.140495  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:12.396503  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:12.620923  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:12.631865  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:12.636593  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:12.897163  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:13.121044  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:13.136015  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:13.144241  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:13.396658  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:13.583324  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:13.625543  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:13.634269  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:13.637736  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:13.896839  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:14.121128  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:14.131466  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:14.135210  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:14.396263  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:14.621051  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:14.630983  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:14.634895  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:14.895697  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:15.122298  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:15.132827  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:15.134816  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:15.395508  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:15.584552  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:15.621041  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:15.631816  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:15.635355  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:15.896864  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:16.124550  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:16.134000  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:16.137950  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:16.396133  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:16.620508  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:16.632736  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:16.635382  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:16.896539  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:17.122608  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:17.133282  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:17.137667  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:17.396323  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:17.624040  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:55:17.632563  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:17.636804  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:17.895514  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:18.083147  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:18.120295  273363 kapi.go:107] duration metric: took 52.003846663s to wait for kubernetes.io/minikube-addons=registry ...
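	(The kapi.go:96/kapi.go:107 pair above is the same polling pattern applied to a label selector: list the pods matching, e.g., kubernetes.io/minikube-addons=registry, keep waiting while any is still Pending, and log the total duration once all are running. A hedged sketch under the same assumptions as the node example; waitForSelector and allRunning are hypothetical names, not minikube's kapi.go API:)

// kapiwait.go - sketch of a label-selector wait loop. Illustration only;
// NOT minikube's kapi.go. The 500ms poll interval is an assumption.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForSelector polls until every pod matching selector in namespace
// is Running, or ctx expires.
func waitForSelector(ctx context.Context, client kubernetes.Interface, namespace, selector string) error {
	start := time.Now()
	for {
		pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			// Mirrors the kapi.go:107 "duration metric" line above.
			log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
			return nil
		}
		log.Printf("waiting for pod %q, current state: Pending", selector)
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %s: %w", selector, ctx.Err())
		case <-time.After(500 * time.Millisecond): // poll interval (assumed)
		}
	}
}

// allRunning reports whether every pod in the list has phase Running.
func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}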
	I1211 23:55:18.132790  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:18.141601  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:18.396753  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:18.631791  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:18.635945  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:18.895468  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:19.132359  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:19.137655  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:19.395717  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:19.631072  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:19.636841  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:19.896608  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:20.087274  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:20.153725  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:20.162096  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:20.401673  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:20.632324  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:20.637202  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:20.896506  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:21.132073  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:21.137100  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:21.397217  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:21.632639  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:21.637339  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:21.896202  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:22.132690  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:22.137591  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:22.396719  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:22.582839  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:22.631964  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:22.636999  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:22.895834  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:23.134279  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:23.136927  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:23.395713  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:23.664268  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:23.666703  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:23.895423  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:24.142916  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:24.144572  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:24.396665  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:24.591826  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:24.634005  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:24.637388  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:24.897265  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:25.144113  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:25.145968  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:25.395595  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:25.632617  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:25.636674  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:25.897051  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:26.132669  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:26.139108  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:26.397244  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:26.635974  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:26.638312  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:26.898697  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:27.083309  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:27.134029  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:27.137572  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:27.396574  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:27.632587  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:27.637131  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:27.895751  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:28.138311  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:28.138574  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:28.396374  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:28.632427  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:28.635434  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:28.896313  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:29.085151  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:29.138518  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:29.139826  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:29.396503  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:29.631145  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:29.634822  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:29.895453  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:30.134801  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:30.136071  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:30.396290  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:30.631385  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:30.634891  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:30.895348  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:31.087385  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:31.142570  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:31.144381  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:31.395963  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:31.633646  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:31.636976  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:31.896229  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:32.135720  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:32.136703  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:32.398858  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:32.631867  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:32.635462  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:32.896336  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:33.131161  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:33.140945  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:33.401245  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:33.584322  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:33.632242  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:33.636504  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:33.897106  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:34.131997  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:34.135784  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:34.397224  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:34.632733  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:34.639843  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:34.895514  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:35.133870  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:35.139052  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:35.396080  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:35.633814  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:35.637614  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:35.896894  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:36.084639  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:36.135211  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:36.139177  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:36.396278  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:36.631381  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:36.635037  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:36.895965  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:37.132628  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:37.135218  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:37.398897  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:37.631864  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:37.635392  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:37.895480  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:38.132634  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:38.136649  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:38.396257  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:38.582561  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:38.631258  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:38.634839  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:38.895157  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:39.131520  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:39.135213  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:39.396289  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:39.631228  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:39.635124  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:39.895746  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:40.133367  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:40.136684  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:40.396465  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:40.582857  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:40.631633  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:40.635786  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:40.895564  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:41.131784  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:41.135830  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:41.395667  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:41.631227  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:41.635562  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:41.900915  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:42.132172  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:42.137643  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:42.395944  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:42.632339  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:42.636383  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:42.895801  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:43.083634  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:43.140743  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:43.146449  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:43.396028  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:43.637659  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:43.640638  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:43.896595  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:44.132143  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:44.136663  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:44.395210  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:44.635819  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:44.646722  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:44.931645  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:45.103273  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:45.135988  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:45.139712  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:45.397709  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:45.631286  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:45.635154  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:45.896978  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:46.136766  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:46.142791  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:46.398377  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:46.632088  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:46.635845  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:46.895591  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:47.133953  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:47.138828  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:47.398329  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:47.583338  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:47.640165  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:47.641616  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:47.897161  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:48.134111  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:48.141106  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:48.395872  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:48.640522  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:48.640706  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:48.896514  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:49.131704  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:49.135671  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:49.395492  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:49.631896  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:49.637149  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:49.895729  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:50.085771  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:50.132940  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:50.136859  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:50.396638  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:50.632719  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:50.637217  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:50.895428  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:51.135669  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:51.138548  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:51.404054  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:51.635176  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:51.644240  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:51.896435  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:52.134070  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:52.138537  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:52.396210  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:52.584070  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:52.636401  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:52.636819  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:52.896723  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:53.159218  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:53.161064  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:53.398009  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:53.641328  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:53.643514  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:53.897759  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:54.140570  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:54.144117  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:54.395853  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:54.631867  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:54.635361  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:54.895826  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:55.082715  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:55.131804  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:55.136392  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:55.395793  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:55.630739  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:55.634938  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:55.895438  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:56.134975  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:56.136914  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:56.395475  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:56.634291  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:56.640688  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:56.895596  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:57.084031  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:57.133679  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:57.143869  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:57.396429  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:57.632015  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:57.637950  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:57.896452  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:58.137125  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:58.139420  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:58.396912  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:58.632294  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:58.635490  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:58.896409  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:59.084956  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:55:59.132096  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:59.136692  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:59.396167  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:55:59.633579  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:55:59.637498  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:55:59.896063  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:00.146644  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:00.187102  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:00.473739  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:00.632016  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:00.635042  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:00.896185  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:01.132267  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:01.136868  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:01.414472  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:01.588816  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:01.645412  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:01.647636  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:01.896400  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:02.132737  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:02.134947  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:02.395625  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:02.632536  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:02.641942  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:02.895870  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:03.133984  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:03.140756  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:03.398209  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:03.633254  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:03.637757  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:03.900000  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:04.089146  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:04.133315  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:04.135364  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:04.395916  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:04.638903  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:04.639185  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:04.895881  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:05.131842  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:05.135828  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:05.395745  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:05.634899  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:05.638829  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:05.913179  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:06.133113  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:06.136977  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:06.396179  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:06.582374  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:06.632634  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:06.635435  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:06.896867  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:07.138792  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:07.139203  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:07.395646  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:07.631988  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:07.636612  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:07.897366  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:08.132152  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:08.137544  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:08.396290  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:08.586982  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:08.632454  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:08.634730  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:08.895870  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:09.133165  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:09.138520  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:09.396612  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:09.634562  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:09.638392  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:09.896173  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:10.141117  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:10.148775  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:10.396081  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:10.635461  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:10.640211  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:10.896460  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:11.083749  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:11.132889  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:11.136334  273363 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:56:11.396154  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:11.632088  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:11.636469  273363 kapi.go:107] duration metric: took 1m44.505789982s to wait for app.kubernetes.io/name=ingress-nginx ...
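The kapi.go:96 / kapi.go:107 pairs above are a label-selector polling loop: list the pods matching a selector, log the observed state, retry on a short interval, and record the elapsed time once every match is Running. A minimal client-go sketch of that pattern follows; the names, interval, and log text are illustrative, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsBySelector polls about twice a second, roughly matching the
	// 250-500ms cadence of the timestamps in the log above.
	func waitForPodsBySelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists are retried
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
		}
		return err
	}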
	I1211 23:56:11.896334  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:12.131162  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:12.396076  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:12.633019  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:12.897604  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:13.133443  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:13.395651  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:13.587136  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:13.635324  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:13.895920  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:14.132162  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:14.395081  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:56:14.632037  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:14.896173  273363 kapi.go:107] duration metric: took 1m44.00425351s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1211 23:56:14.899175  273363 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-680529 cluster.
	I1211 23:56:14.901743  273363 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1211 23:56:14.904248  273363 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
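The three out.go messages above are the user-facing summary printed once the gcp-auth addon is ready. Per the second message, opting a pod out of credential mounting only requires the `gcp-auth-skip-secret` label key. A hedged sketch of such a pod spec using client-go types (the pod name, image, and label value are placeholders; the message indicates only the key needs to be present):

	package main

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// skipGCPAuthPod builds a pod the gcp-auth webhook should leave alone.
	func skipGCPAuthPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-creds-example",
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // presence of the key is what matters
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
	}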
	I1211 23:56:15.132518  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:15.633124  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:16.082895  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:16.133547  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:16.638717  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:17.131958  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:17.631831  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:18.083132  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:18.146998  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:18.632988  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:19.132663  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:19.631743  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:20.085326  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:20.140971  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:20.632160  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:21.132575  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:21.631011  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:22.132447  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:22.583409  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:22.635209  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:23.132783  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:23.631673  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:24.132455  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:24.584094  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:24.632875  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:25.134062  273363 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:56:25.632006  273363 kapi.go:107] duration metric: took 1m58.006010063s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1211 23:56:25.633960  273363 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1211 23:56:25.635279  273363 addons.go:510] duration metric: took 2m5.775875639s for enable addons: enabled=[cloud-spanner storage-provisioner amd-gpu-device-plugin ingress-dns nvidia-device-plugin storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
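Each "duration metric" line above comes from timing a phase and logging the elapsed time; the 2m5.78s figure for "enable addons" covers all fourteen addons in the list. A sketch of the pattern, assuming klog-style logging (the helper name and message format are illustrative):

	package main

	import (
		"time"

		"k8s.io/klog/v2"
	)

	// timeEnableAddons wraps a phase with start/stop timing; enableAddons is
	// a placeholder for the real work.
	func timeEnableAddons(enableAddons func() error) error {
		start := time.Now()
		if err := enableAddons(); err != nil {
			return err
		}
		klog.Infof("duration metric: took %s for enable addons", time.Since(start))
		return nil
	}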
	I1211 23:56:27.082986  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:29.083043  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:31.583162  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:34.082432  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:36.083941  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:38.582349  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:40.583491  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:43.083543  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:45.086045  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:47.582507  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:49.582915  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:52.083047  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:54.083130  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:56.086544  273363 pod_ready.go:103] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"False"
	I1211 23:56:58.083614  273363 pod_ready.go:93] pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace has status "Ready":"True"
	I1211 23:56:58.083645  273363 pod_ready.go:82] duration metric: took 1m48.507571525s for pod "metrics-server-84c5f94fbc-c68dp" in "kube-system" namespace to be "Ready" ...
	I1211 23:56:58.083658  273363 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pcmmw" in "kube-system" namespace to be "Ready" ...
	I1211 23:56:58.096690  273363 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pcmmw" in "kube-system" namespace has status "Ready":"True"
	I1211 23:56:58.096732  273363 pod_ready.go:82] duration metric: took 13.05836ms for pod "nvidia-device-plugin-daemonset-pcmmw" in "kube-system" namespace to be "Ready" ...
	I1211 23:56:58.096757  273363 pod_ready.go:39] duration metric: took 1m50.952832861s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
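The pod_ready.go:103 / :93 lines above track a different check than the phase-based kapi loop: they read the pod's PodReady condition, which only flips to True once the pod's containers pass readiness checks (hence metrics-server reporting "Ready":"False" for 1m48s while already scheduled). A minimal helper matching that check, written against standard client-go types (an approximation of the logic, not minikube's code):

	package main

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}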
	I1211 23:56:58.096778  273363 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:56:58.096816  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 23:56:58.096899  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 23:56:58.166276  273363 cri.go:89] found id: "a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:56:58.166306  273363 cri.go:89] found id: ""
	I1211 23:56:58.166315  273363 logs.go:282] 1 containers: [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9]
	I1211 23:56:58.166372  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.170754  273363 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 23:56:58.170830  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 23:56:58.209581  273363 cri.go:89] found id: "df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:56:58.209605  273363 cri.go:89] found id: ""
	I1211 23:56:58.209614  273363 logs.go:282] 1 containers: [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30]
	I1211 23:56:58.209671  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.213465  273363 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 23:56:58.213592  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 23:56:58.265032  273363 cri.go:89] found id: "a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:56:58.265098  273363 cri.go:89] found id: ""
	I1211 23:56:58.265120  273363 logs.go:282] 1 containers: [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f]
	I1211 23:56:58.265203  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.268819  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 23:56:58.268935  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 23:56:58.307716  273363 cri.go:89] found id: "a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:56:58.307782  273363 cri.go:89] found id: ""
	I1211 23:56:58.307805  273363 logs.go:282] 1 containers: [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8]
	I1211 23:56:58.307921  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.311994  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 23:56:58.312140  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 23:56:58.350576  273363 cri.go:89] found id: "f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:56:58.350613  273363 cri.go:89] found id: ""
	I1211 23:56:58.350622  273363 logs.go:282] 1 containers: [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73]
	I1211 23:56:58.350713  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.354323  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 23:56:58.354398  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 23:56:58.396270  273363 cri.go:89] found id: "b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:56:58.396294  273363 cri.go:89] found id: ""
	I1211 23:56:58.396303  273363 logs.go:282] 1 containers: [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7]
	I1211 23:56:58.396367  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:56:58.400310  273363 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 23:56:58.400421  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 23:56:58.439413  273363 cri.go:89] found id: "d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:56:58.439436  273363 cri.go:89] found id: ""
	I1211 23:56:58.439444  273363 logs.go:282] 1 containers: [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5]
	I1211 23:56:58.439500  273363 ssh_runner.go:195] Run: which crictl
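The cri.go:54 / cri.go:89 pairs above discover the container ID for each control-plane component by running `sudo crictl ps -a --quiet --name=<component>` over SSH; `--quiet` prints one container ID per line, and the empty `found id: ""` entries suggest the raw output is split on newlines including a trailing blank. A local sketch of that step with plain os/exec (minikube runs the same command through its ssh_runner instead):

	package main

	import (
		"os/exec"
		"strings"
	)

	// findContainerIDs returns the IDs of all containers (running or exited)
	// whose name matches the given component.
	func findContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line) // one 64-hex-char container ID per line
			}
		}
		return ids, nil
	}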
	I1211 23:56:58.443103  273363 logs.go:123] Gathering logs for kube-scheduler [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8] ...
	I1211 23:56:58.443128  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:56:58.497076  273363 logs.go:123] Gathering logs for kube-controller-manager [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7] ...
	I1211 23:56:58.497110  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:56:58.571113  273363 logs.go:123] Gathering logs for kindnet [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5] ...
	I1211 23:56:58.571152  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:56:58.614912  273363 logs.go:123] Gathering logs for dmesg ...
	I1211 23:56:58.614948  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 23:56:58.633232  273363 logs.go:123] Gathering logs for etcd [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30] ...
	I1211 23:56:58.633261  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:56:58.684598  273363 logs.go:123] Gathering logs for kube-apiserver [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9] ...
	I1211 23:56:58.684631  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:56:58.742475  273363 logs.go:123] Gathering logs for coredns [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f] ...
	I1211 23:56:58.742525  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:56:58.794029  273363 logs.go:123] Gathering logs for kube-proxy [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73] ...
	I1211 23:56:58.794062  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:56:58.843104  273363 logs.go:123] Gathering logs for CRI-O ...
	I1211 23:56:58.843130  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 23:56:58.939616  273363 logs.go:123] Gathering logs for container status ...
	I1211 23:56:58.939655  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 23:56:58.997188  273363 logs.go:123] Gathering logs for kubelet ...
	I1211 23:56:58.997216  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 23:56:59.079449  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.069258    1516 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.079684  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:56:59.079866  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.080089  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:56:59.080258  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.080464  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:56:59.117935  273363 logs.go:123] Gathering logs for describe nodes ...
	I1211 23:56:59.117968  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 23:56:59.315413  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:56:59.315441  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1211 23:56:59.315500  273363 out.go:270] X Problems detected in kubelet:
	W1211 23:56:59.315513  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:56:59.315521  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.315538  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:56:59.315544  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:56:59.315557  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:56:59.315565  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:56:59.315571  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
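
The repeated `sudo /usr/bin/crictl logs --tail 400 <container-id>` invocations above are how minikube tails each control-plane container during this health-check loop (with a `which crictl || echo crictl` / `sudo docker ps -a` fallback for the container-status listing). A minimal Go sketch of the same pattern, run directly on a node rather than through minikube's SSH runner; the container ID here is a placeholder, not one of the IDs above:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// tailContainerLogs shells out to crictl the same way the log lines above do.
	// crictl must be installed on the node for this to work.
	func tailContainerLogs(containerID string, tail int) (string, error) {
		cmd := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(tail), containerID)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}
	
	func main() {
		// Placeholder ID; substitute a real one from `sudo crictl ps -a`.
		out, err := tailContainerLogs("a7c5aafc3840b", 400)
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(out)
	}
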
	I1211 23:57:09.317176  273363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:57:09.331280  273363 api_server.go:72] duration metric: took 2m49.472318402s to wait for apiserver process to appear ...
	I1211 23:57:09.331308  273363 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:57:09.331343  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 23:57:09.331402  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 23:57:09.378597  273363 cri.go:89] found id: "a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:57:09.378623  273363 cri.go:89] found id: ""
	I1211 23:57:09.378631  273363 logs.go:282] 1 containers: [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9]
	I1211 23:57:09.378689  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.382269  273363 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 23:57:09.382343  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 23:57:09.423129  273363 cri.go:89] found id: "df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:57:09.423150  273363 cri.go:89] found id: ""
	I1211 23:57:09.423158  273363 logs.go:282] 1 containers: [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30]
	I1211 23:57:09.423216  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.427199  273363 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 23:57:09.427272  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 23:57:09.467492  273363 cri.go:89] found id: "a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:57:09.467516  273363 cri.go:89] found id: ""
	I1211 23:57:09.467525  273363 logs.go:282] 1 containers: [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f]
	I1211 23:57:09.467582  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.471293  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 23:57:09.471370  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 23:57:09.513018  273363 cri.go:89] found id: "a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:57:09.513037  273363 cri.go:89] found id: ""
	I1211 23:57:09.513045  273363 logs.go:282] 1 containers: [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8]
	I1211 23:57:09.513102  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.516829  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 23:57:09.516901  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 23:57:09.559664  273363 cri.go:89] found id: "f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:57:09.559683  273363 cri.go:89] found id: ""
	I1211 23:57:09.559691  273363 logs.go:282] 1 containers: [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73]
	I1211 23:57:09.559745  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.564724  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 23:57:09.564821  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 23:57:09.608178  273363 cri.go:89] found id: "b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:57:09.608202  273363 cri.go:89] found id: ""
	I1211 23:57:09.608211  273363 logs.go:282] 1 containers: [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7]
	I1211 23:57:09.608269  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.612621  273363 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 23:57:09.612726  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 23:57:09.670991  273363 cri.go:89] found id: "d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:57:09.671015  273363 cri.go:89] found id: ""
	I1211 23:57:09.671023  273363 logs.go:282] 1 containers: [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5]
	I1211 23:57:09.671084  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:09.674493  273363 logs.go:123] Gathering logs for kube-controller-manager [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7] ...
	I1211 23:57:09.674521  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:57:09.742051  273363 logs.go:123] Gathering logs for CRI-O ...
	I1211 23:57:09.742090  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 23:57:09.832554  273363 logs.go:123] Gathering logs for describe nodes ...
	I1211 23:57:09.832593  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 23:57:09.969424  273363 logs.go:123] Gathering logs for kube-apiserver [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9] ...
	I1211 23:57:09.969455  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:57:10.043312  273363 logs.go:123] Gathering logs for kube-proxy [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73] ...
	I1211 23:57:10.043354  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:57:10.087181  273363 logs.go:123] Gathering logs for coredns [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f] ...
	I1211 23:57:10.087213  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:57:10.145118  273363 logs.go:123] Gathering logs for kube-scheduler [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8] ...
	I1211 23:57:10.145154  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:57:10.208039  273363 logs.go:123] Gathering logs for kindnet [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5] ...
	I1211 23:57:10.208075  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:57:10.254205  273363 logs.go:123] Gathering logs for container status ...
	I1211 23:57:10.254236  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 23:57:10.304877  273363 logs.go:123] Gathering logs for kubelet ...
	I1211 23:57:10.304907  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 23:57:10.382798  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.069258    1516 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.383065  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:10.383249  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.383471  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:10.383635  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.383841  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:57:10.421815  273363 logs.go:123] Gathering logs for dmesg ...
	I1211 23:57:10.421849  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 23:57:10.438800  273363 logs.go:123] Gathering logs for etcd [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30] ...
	I1211 23:57:10.438873  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:57:10.504585  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:57:10.504621  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1211 23:57:10.504709  273363 out.go:270] X Problems detected in kubelet:
	W1211 23:57:10.504724  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:10.504732  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.504754  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:10.504764  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:10.504770  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:57:10.504781  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:57:10.504787  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:57:20.506748  273363 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1211 23:57:20.515102  273363 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1211 23:57:20.516136  273363 api_server.go:141] control plane version: v1.31.2
	I1211 23:57:20.516162  273363 api_server.go:131] duration metric: took 11.184846506s to wait for apiserver health ...
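
The healthz wait logged above reduces to polling the apiserver endpoint until it answers 200 with body `ok`. A minimal sketch of that probe, assuming the same `https://192.168.49.2:8443/healthz` endpoint as in the log; certificate verification is skipped because the test cluster's CA is not in the host trust store:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is signed by the cluster CA, which this host
			// does not trust, so verification is skipped for this probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
	}
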
	I1211 23:57:20.516172  273363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:57:20.516193  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 23:57:20.516257  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 23:57:20.556860  273363 cri.go:89] found id: "a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:57:20.556885  273363 cri.go:89] found id: ""
	I1211 23:57:20.556893  273363 logs.go:282] 1 containers: [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9]
	I1211 23:57:20.556953  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.560462  273363 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 23:57:20.560539  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 23:57:20.598091  273363 cri.go:89] found id: "df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:57:20.598115  273363 cri.go:89] found id: ""
	I1211 23:57:20.598123  273363 logs.go:282] 1 containers: [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30]
	I1211 23:57:20.598204  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.601847  273363 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 23:57:20.601925  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 23:57:20.644333  273363 cri.go:89] found id: "a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:57:20.644356  273363 cri.go:89] found id: ""
	I1211 23:57:20.644365  273363 logs.go:282] 1 containers: [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f]
	I1211 23:57:20.644422  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.648306  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 23:57:20.648383  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 23:57:20.687325  273363 cri.go:89] found id: "a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:57:20.687350  273363 cri.go:89] found id: ""
	I1211 23:57:20.687358  273363 logs.go:282] 1 containers: [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8]
	I1211 23:57:20.687418  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.691075  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 23:57:20.691166  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 23:57:20.731502  273363 cri.go:89] found id: "f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:57:20.731528  273363 cri.go:89] found id: ""
	I1211 23:57:20.731537  273363 logs.go:282] 1 containers: [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73]
	I1211 23:57:20.731596  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.735345  273363 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 23:57:20.735427  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 23:57:20.799679  273363 cri.go:89] found id: "b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:57:20.799703  273363 cri.go:89] found id: ""
	I1211 23:57:20.799713  273363 logs.go:282] 1 containers: [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7]
	I1211 23:57:20.799770  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.804067  273363 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 23:57:20.804144  273363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 23:57:20.877315  273363 cri.go:89] found id: "d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:57:20.877340  273363 cri.go:89] found id: ""
	I1211 23:57:20.877348  273363 logs.go:282] 1 containers: [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5]
	I1211 23:57:20.877406  273363 ssh_runner.go:195] Run: which crictl
	I1211 23:57:20.881138  273363 logs.go:123] Gathering logs for kubelet ...
	I1211 23:57:20.881162  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 23:57:20.962683  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.069258    1516 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-680529' and this object
	W1211 23:57:20.962947  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:20.963136  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:20.963363  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:20.963528  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:20.963732  273363 logs.go:138] Found kubelet problem: Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:57:21.002499  273363 logs.go:123] Gathering logs for describe nodes ...
	I1211 23:57:21.002529  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 23:57:21.148920  273363 logs.go:123] Gathering logs for kube-apiserver [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9] ...
	I1211 23:57:21.148957  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9"
	I1211 23:57:21.211526  273363 logs.go:123] Gathering logs for kube-proxy [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73] ...
	I1211 23:57:21.211560  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73"
	I1211 23:57:21.253340  273363 logs.go:123] Gathering logs for CRI-O ...
	I1211 23:57:21.253369  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 23:57:21.350378  273363 logs.go:123] Gathering logs for kindnet [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5] ...
	I1211 23:57:21.350413  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5"
	I1211 23:57:21.395975  273363 logs.go:123] Gathering logs for container status ...
	I1211 23:57:21.396002  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 23:57:21.445282  273363 logs.go:123] Gathering logs for dmesg ...
	I1211 23:57:21.445311  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 23:57:21.461131  273363 logs.go:123] Gathering logs for etcd [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30] ...
	I1211 23:57:21.461161  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30"
	I1211 23:57:21.511518  273363 logs.go:123] Gathering logs for coredns [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f] ...
	I1211 23:57:21.511553  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f"
	I1211 23:57:21.553703  273363 logs.go:123] Gathering logs for kube-scheduler [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8] ...
	I1211 23:57:21.553736  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8"
	I1211 23:57:21.613757  273363 logs.go:123] Gathering logs for kube-controller-manager [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7] ...
	I1211 23:57:21.613790  273363 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7"
	I1211 23:57:21.686035  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:57:21.686067  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1211 23:57:21.686125  273363 out.go:270] X Problems detected in kubelet:
	W1211 23:57:21.686151  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.069313    1516 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:21.686159  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.104848    1516 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-680529" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:21.686168  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.104893    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	W1211 23:57:21.686174  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: W1211 23:55:07.106417    1516 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-680529" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-680529' and this object
	W1211 23:57:21.686181  273363 out.go:270]   Dec 11 23:55:07 addons-680529 kubelet[1516]: E1211 23:55:07.106469    1516 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-680529\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-680529' and this object" logger="UnhandledError"
	I1211 23:57:21.686192  273363 out.go:358] Setting ErrFile to fd 2...
	I1211 23:57:21.686198  273363 out.go:392] TERM=,COLORTERM=, which probably does not support color
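
The recurring "no relationship found between node 'addons-680529' and this object" warnings collected on each pass are node-authorizer denials: the kubelet may only read the secrets and configmaps referenced by pods already bound to its node, so list/watch attempts made before those bindings are indexed get rejected. They are typically transient during node startup, which is consistent with every pass above re-reporting the same five entries from 23:55:07 rather than finding new ones.
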
	I1211 23:57:31.699507  273363 system_pods.go:59] 18 kube-system pods found
	I1211 23:57:31.699551  273363 system_pods.go:61] "coredns-7c65d6cfc9-ltfkm" [552c2c98-c09f-4851-86f5-93ea3c60d6b8] Running
	I1211 23:57:31.699559  273363 system_pods.go:61] "csi-hostpath-attacher-0" [1ff2269a-08fe-4383-be8d-d46a2b31efe3] Running
	I1211 23:57:31.699564  273363 system_pods.go:61] "csi-hostpath-resizer-0" [5a0bf1bf-83c1-460e-a41a-9950bfd8c409] Running
	I1211 23:57:31.699568  273363 system_pods.go:61] "csi-hostpathplugin-ltfzd" [472dd4a7-f472-4ea4-a78e-aff7da5aa7d5] Running
	I1211 23:57:31.699572  273363 system_pods.go:61] "etcd-addons-680529" [29c7c556-8282-42d4-8d66-b29a6d066eb7] Running
	I1211 23:57:31.699578  273363 system_pods.go:61] "kindnet-5n8x6" [fa640b02-6bf5-46fd-8c97-9292f66f15bb] Running
	I1211 23:57:31.699582  273363 system_pods.go:61] "kube-apiserver-addons-680529" [261548ca-d14b-4b3f-bffc-b8cc7f62f7cd] Running
	I1211 23:57:31.699586  273363 system_pods.go:61] "kube-controller-manager-addons-680529" [ea32f6bf-c0ae-4080-b39e-64568e70204f] Running
	I1211 23:57:31.699591  273363 system_pods.go:61] "kube-ingress-dns-minikube" [e15ef8b4-426e-4564-b396-6c78ba49bfbf] Running
	I1211 23:57:31.699595  273363 system_pods.go:61] "kube-proxy-rl6lb" [46b9b123-b304-41dc-8f4b-94ede15fd378] Running
	I1211 23:57:31.699600  273363 system_pods.go:61] "kube-scheduler-addons-680529" [f05e82d8-6388-4d24-8ce3-b77be14393b5] Running
	I1211 23:57:31.699604  273363 system_pods.go:61] "metrics-server-84c5f94fbc-c68dp" [09bd89d6-eb8c-4252-ae07-4d3b5b855169] Running
	I1211 23:57:31.699608  273363 system_pods.go:61] "nvidia-device-plugin-daemonset-pcmmw" [165e1834-cab1-404d-bc96-38a766c51940] Running
	I1211 23:57:31.699642  273363 system_pods.go:61] "registry-5cc95cd69-xnkxj" [13f2d3d8-1d08-41f1-80e2-d19e09a1c46d] Running
	I1211 23:57:31.699677  273363 system_pods.go:61] "registry-proxy-f2dfg" [79eadeb8-583a-4e72-87f2-bd4c865a9319] Running
	I1211 23:57:31.699708  273363 system_pods.go:61] "snapshot-controller-56fcc65765-9bmsg" [849b66e3-659e-432e-88d7-97ec947ba293] Running
	I1211 23:57:31.699742  273363 system_pods.go:61] "snapshot-controller-56fcc65765-gcl6n" [b1322253-a509-4079-a8ef-a53886d23acf] Running
	I1211 23:57:31.699771  273363 system_pods.go:61] "storage-provisioner" [a2973b5d-d765-4e68-ad3c-31a62ab3399d] Running
	I1211 23:57:31.699801  273363 system_pods.go:74] duration metric: took 11.183622304s to wait for pod list to return data ...
	I1211 23:57:31.699822  273363 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:57:31.702650  273363 default_sa.go:45] found service account: "default"
	I1211 23:57:31.702678  273363 default_sa.go:55] duration metric: took 2.832258ms for default service account to be created ...
	I1211 23:57:31.702689  273363 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:57:31.714502  273363 system_pods.go:86] 18 kube-system pods found
	I1211 23:57:31.714543  273363 system_pods.go:89] "coredns-7c65d6cfc9-ltfkm" [552c2c98-c09f-4851-86f5-93ea3c60d6b8] Running
	I1211 23:57:31.714557  273363 system_pods.go:89] "csi-hostpath-attacher-0" [1ff2269a-08fe-4383-be8d-d46a2b31efe3] Running
	I1211 23:57:31.714562  273363 system_pods.go:89] "csi-hostpath-resizer-0" [5a0bf1bf-83c1-460e-a41a-9950bfd8c409] Running
	I1211 23:57:31.714568  273363 system_pods.go:89] "csi-hostpathplugin-ltfzd" [472dd4a7-f472-4ea4-a78e-aff7da5aa7d5] Running
	I1211 23:57:31.714573  273363 system_pods.go:89] "etcd-addons-680529" [29c7c556-8282-42d4-8d66-b29a6d066eb7] Running
	I1211 23:57:31.714583  273363 system_pods.go:89] "kindnet-5n8x6" [fa640b02-6bf5-46fd-8c97-9292f66f15bb] Running
	I1211 23:57:31.714591  273363 system_pods.go:89] "kube-apiserver-addons-680529" [261548ca-d14b-4b3f-bffc-b8cc7f62f7cd] Running
	I1211 23:57:31.714597  273363 system_pods.go:89] "kube-controller-manager-addons-680529" [ea32f6bf-c0ae-4080-b39e-64568e70204f] Running
	I1211 23:57:31.714607  273363 system_pods.go:89] "kube-ingress-dns-minikube" [e15ef8b4-426e-4564-b396-6c78ba49bfbf] Running
	I1211 23:57:31.714616  273363 system_pods.go:89] "kube-proxy-rl6lb" [46b9b123-b304-41dc-8f4b-94ede15fd378] Running
	I1211 23:57:31.714624  273363 system_pods.go:89] "kube-scheduler-addons-680529" [f05e82d8-6388-4d24-8ce3-b77be14393b5] Running
	I1211 23:57:31.714631  273363 system_pods.go:89] "metrics-server-84c5f94fbc-c68dp" [09bd89d6-eb8c-4252-ae07-4d3b5b855169] Running
	I1211 23:57:31.714636  273363 system_pods.go:89] "nvidia-device-plugin-daemonset-pcmmw" [165e1834-cab1-404d-bc96-38a766c51940] Running
	I1211 23:57:31.714641  273363 system_pods.go:89] "registry-5cc95cd69-xnkxj" [13f2d3d8-1d08-41f1-80e2-d19e09a1c46d] Running
	I1211 23:57:31.714653  273363 system_pods.go:89] "registry-proxy-f2dfg" [79eadeb8-583a-4e72-87f2-bd4c865a9319] Running
	I1211 23:57:31.714659  273363 system_pods.go:89] "snapshot-controller-56fcc65765-9bmsg" [849b66e3-659e-432e-88d7-97ec947ba293] Running
	I1211 23:57:31.714664  273363 system_pods.go:89] "snapshot-controller-56fcc65765-gcl6n" [b1322253-a509-4079-a8ef-a53886d23acf] Running
	I1211 23:57:31.714668  273363 system_pods.go:89] "storage-provisioner" [a2973b5d-d765-4e68-ad3c-31a62ab3399d] Running
	I1211 23:57:31.714676  273363 system_pods.go:126] duration metric: took 11.981258ms to wait for k8s-apps to be running ...
	I1211 23:57:31.714691  273363 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:57:31.714753  273363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:57:31.728985  273363 system_svc.go:56] duration metric: took 14.286576ms WaitForService to wait for kubelet
	I1211 23:57:31.729029  273363 kubeadm.go:582] duration metric: took 3m11.8700725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:57:31.729054  273363 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:57:31.732455  273363 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1211 23:57:31.732486  273363 node_conditions.go:123] node cpu capacity is 2
	I1211 23:57:31.732500  273363 node_conditions.go:105] duration metric: took 3.433981ms to run NodePressure ...
	I1211 23:57:31.732514  273363 start.go:241] waiting for startup goroutines ...
	I1211 23:57:31.732521  273363 start.go:246] waiting for cluster config update ...
	I1211 23:57:31.732560  273363 start.go:255] writing updated cluster config ...
	I1211 23:57:31.732859  273363 ssh_runner.go:195] Run: rm -f paused
	I1211 23:57:32.187698  273363 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1211 23:57:32.189304  273363 out.go:177] * Done! kubectl is now configured to use "addons-680529" cluster and "default" namespace by default
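
Everything from here down is the post-mortem dump for the failed test: the `==> ... <==` sections mirror what minikube's `logs` subcommand prints for a profile. Assuming the same test binary used elsewhere in this report, something like the following command shape should reproduce them against a live cluster:

	out/minikube-linux-arm64 -p addons-680529 logs
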
	
	
	==> CRI-O <==
	Dec 12 00:02:31 addons-680529 conmon[3164]: conmon 488478d8f00fa9de4d29 <ninfo>: container 3175 exited with status 137
	Dec 12 00:02:31 addons-680529 crio[978]: time="2024-12-12 00:02:31.554309519Z" level=info msg="Stopped container 488478d8f00fa9de4d29600fd65fbbdf4fbdbf937aebd96ad8527ea3e71c9fc3: local-path-storage/local-path-provisioner-86d989889c-jw8w6/local-path-provisioner" id=ed9dba93-476a-4b72-92ef-1525f53e83c0 name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 00:02:31 addons-680529 crio[978]: time="2024-12-12 00:02:31.554857435Z" level=info msg="Stopping pod sandbox: f4194d217e3b52dd268ffef45823d66a602c841357ddd0fc0106b90b6a65977a" id=2adc85b1-48ad-4a22-bb67-c34dd95acc76 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:02:31 addons-680529 crio[978]: time="2024-12-12 00:02:31.555091099Z" level=info msg="Got pod network &{Name:local-path-provisioner-86d989889c-jw8w6 Namespace:local-path-storage ID:f4194d217e3b52dd268ffef45823d66a602c841357ddd0fc0106b90b6a65977a UID:7ba81e90-bbab-4b18-b71e-16d6815f5836 NetNS:/var/run/netns/d576fd91-7345-4a39-bde1-efc6e5533c68 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 00:02:31 addons-680529 crio[978]: time="2024-12-12 00:02:31.555236160Z" level=info msg="Deleting pod local-path-storage_local-path-provisioner-86d989889c-jw8w6 from CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:02:31 addons-680529 crio[978]: time="2024-12-12 00:02:31.572453292Z" level=info msg="Stopped pod sandbox: f4194d217e3b52dd268ffef45823d66a602c841357ddd0fc0106b90b6a65977a" id=2adc85b1-48ad-4a22-bb67-c34dd95acc76 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:02:31 addons-680529 crio[978]: time="2024-12-12 00:02:31.727534128Z" level=info msg="Removing container: 488478d8f00fa9de4d29600fd65fbbdf4fbdbf937aebd96ad8527ea3e71c9fc3" id=02d8703b-651f-49d2-892f-561b3e0d958b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:02:31 addons-680529 crio[978]: time="2024-12-12 00:02:31.751019615Z" level=info msg="Removed container 488478d8f00fa9de4d29600fd65fbbdf4fbdbf937aebd96ad8527ea3e71c9fc3: local-path-storage/local-path-provisioner-86d989889c-jw8w6/local-path-provisioner" id=02d8703b-651f-49d2-892f-561b3e0d958b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:02:50 addons-680529 crio[978]: time="2024-12-12 00:02:50.703179093Z" level=info msg="Stopping container: caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc (timeout: 30s)" id=85304910-c9a9-44cb-858b-ac5ec38d2ae6 name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 00:02:50 addons-680529 conmon[3999]: conmon caa2545993e4268e2fa1 <ninfo>: container 4010 exited with status 2
	Dec 12 00:02:50 addons-680529 crio[978]: time="2024-12-12 00:02:50.851582533Z" level=info msg="Stopped container caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc: default/cloud-spanner-emulator-dc5db94f4-s2gl8/cloud-spanner-emulator" id=85304910-c9a9-44cb-858b-ac5ec38d2ae6 name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 00:02:50 addons-680529 crio[978]: time="2024-12-12 00:02:50.852197590Z" level=info msg="Stopping pod sandbox: 73e881f093594c3a24facc184d2bc80a71fa8dffc0ce8f39176df9eee41b2bd9" id=70dec7fc-0193-414a-b73c-897b8af1e281 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:02:50 addons-680529 crio[978]: time="2024-12-12 00:02:50.852444768Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-dc5db94f4-s2gl8 Namespace:default ID:73e881f093594c3a24facc184d2bc80a71fa8dffc0ce8f39176df9eee41b2bd9 UID:f9facd92-c7e6-4d7d-93ca-62c980c74791 NetNS:/var/run/netns/f266ab23-41d8-4497-aff4-7267fa1f39b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 00:02:50 addons-680529 crio[978]: time="2024-12-12 00:02:50.852587695Z" level=info msg="Deleting pod default_cloud-spanner-emulator-dc5db94f4-s2gl8 from CNI network \"kindnet\" (type=ptp)"
	Dec 12 00:02:50 addons-680529 crio[978]: time="2024-12-12 00:02:50.894313945Z" level=info msg="Stopped pod sandbox: 73e881f093594c3a24facc184d2bc80a71fa8dffc0ce8f39176df9eee41b2bd9" id=70dec7fc-0193-414a-b73c-897b8af1e281 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:02:51 addons-680529 crio[978]: time="2024-12-12 00:02:51.770916440Z" level=info msg="Removing container: caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc" id=471f8960-6af0-490b-a17d-eba592e78726 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:02:51 addons-680529 crio[978]: time="2024-12-12 00:02:51.788438182Z" level=info msg="Removed container caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc: default/cloud-spanner-emulator-dc5db94f4-s2gl8/cloud-spanner-emulator" id=471f8960-6af0-490b-a17d-eba592e78726 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 00:03:14 addons-680529 crio[978]: time="2024-12-12 00:03:14.988551687Z" level=info msg="Stopping pod sandbox: 73e881f093594c3a24facc184d2bc80a71fa8dffc0ce8f39176df9eee41b2bd9" id=1d9b3d14-726d-4509-abef-a4cfab5ec168 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:03:14 addons-680529 crio[978]: time="2024-12-12 00:03:14.988594877Z" level=info msg="Stopped pod sandbox (already stopped): 73e881f093594c3a24facc184d2bc80a71fa8dffc0ce8f39176df9eee41b2bd9" id=1d9b3d14-726d-4509-abef-a4cfab5ec168 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:03:14 addons-680529 crio[978]: time="2024-12-12 00:03:14.988863010Z" level=info msg="Removing pod sandbox: 73e881f093594c3a24facc184d2bc80a71fa8dffc0ce8f39176df9eee41b2bd9" id=ca845967-1b75-4af9-964b-4e5f2da967ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 12 00:03:14 addons-680529 crio[978]: time="2024-12-12 00:03:14.998495749Z" level=info msg="Removed pod sandbox: 73e881f093594c3a24facc184d2bc80a71fa8dffc0ce8f39176df9eee41b2bd9" id=ca845967-1b75-4af9-964b-4e5f2da967ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 12 00:03:14 addons-680529 crio[978]: time="2024-12-12 00:03:14.998985707Z" level=info msg="Stopping pod sandbox: f4194d217e3b52dd268ffef45823d66a602c841357ddd0fc0106b90b6a65977a" id=d513a1db-29f9-4d21-8d3a-945bcdf48ef5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:03:14 addons-680529 crio[978]: time="2024-12-12 00:03:14.999026231Z" level=info msg="Stopped pod sandbox (already stopped): f4194d217e3b52dd268ffef45823d66a602c841357ddd0fc0106b90b6a65977a" id=d513a1db-29f9-4d21-8d3a-945bcdf48ef5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 00:03:14 addons-680529 crio[978]: time="2024-12-12 00:03:14.999475936Z" level=info msg="Removing pod sandbox: f4194d217e3b52dd268ffef45823d66a602c841357ddd0fc0106b90b6a65977a" id=d1c0c217-a28e-4421-bcfa-282b64842e32 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 12 00:03:15 addons-680529 crio[978]: time="2024-12-12 00:03:15.009065017Z" level=info msg="Removed pod sandbox: f4194d217e3b52dd268ffef45823d66a602c841357ddd0fc0106b90b6a65977a" id=d1c0c217-a28e-4421-bcfa-282b64842e32 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5290d168b14da       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   bdb2100553114       hello-world-app-55bf9c44b4-jqgpm
	f17aefd0e9da0       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   887fe82cdd28f       nginx
	3904c932956c7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   d082248e224a4       busybox
	95fb6601ca7a5       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   8 minutes ago       Running             metrics-server            0                   390af4ded9379       metrics-server-84c5f94fbc-c68dp
	a3933957bd198       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        9 minutes ago       Running             coredns                   0                   a07880fe8f66c       coredns-7c65d6cfc9-ltfkm
	7490f08ddea6e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        9 minutes ago       Running             storage-provisioner       0                   4f0dd0feacea7       storage-provisioner
	d03a9536de261       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e                      9 minutes ago       Running             kindnet-cni               0                   f9313f7a66b9f       kindnet-5n8x6
	f5b9aebd301a3       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                        9 minutes ago       Running             kube-proxy                0                   3cc29d3eca682       kube-proxy-rl6lb
	a7c5aafc3840b       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                        10 minutes ago      Running             kube-apiserver            0                   0dae4ef4a0f5c       kube-apiserver-addons-680529
	a8aa020a72093       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                        10 minutes ago      Running             kube-scheduler            0                   5a8f6d9e80745       kube-scheduler-addons-680529
	b109b488cf6dc       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                        10 minutes ago      Running             kube-controller-manager   0                   3d88ef9153c55       kube-controller-manager-addons-680529
	df37df1745de7       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        10 minutes ago      Running             etcd                      0                   9e281b2641fe3       etcd-addons-680529
	
	
	==> coredns [a3933957bd198371a01dc1552aa688ea7af1eef840091608a7b42cfed0079b1f] <==
	[INFO] 10.244.0.20:51338 - 14902 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000087938s
	[INFO] 10.244.0.20:51338 - 63881 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006714s
	[INFO] 10.244.0.20:58079 - 20275 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000212758s
	[INFO] 10.244.0.20:51338 - 31031 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000107991s
	[INFO] 10.244.0.20:51338 - 4310 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001553302s
	[INFO] 10.244.0.20:51338 - 29247 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004155435s
	[INFO] 10.244.0.20:51338 - 2297 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109764s
	[INFO] 10.244.0.20:44539 - 52605 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048334s
	[INFO] 10.244.0.20:45916 - 59858 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000282572s
	[INFO] 10.244.0.20:45916 - 59998 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000118444s
	[INFO] 10.244.0.20:45916 - 11821 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008071s
	[INFO] 10.244.0.20:45916 - 32262 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000083081s
	[INFO] 10.244.0.20:45916 - 10957 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065039s
	[INFO] 10.244.0.20:45916 - 60701 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050442s
	[INFO] 10.244.0.20:44539 - 61877 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00009351s
	[INFO] 10.244.0.20:45916 - 14185 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001570269s
	[INFO] 10.244.0.20:44539 - 15230 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006293s
	[INFO] 10.244.0.20:44539 - 34094 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072349s
	[INFO] 10.244.0.20:44539 - 60880 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000872s
	[INFO] 10.244.0.20:44539 - 18897 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00007468s
	[INFO] 10.244.0.20:44539 - 6405 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003324071s
	[INFO] 10.244.0.20:45916 - 51746 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005168437s
	[INFO] 10.244.0.20:45916 - 21596 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055923s
	[INFO] 10.244.0.20:44539 - 21804 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001112558s
	[INFO] 10.244.0.20:44539 - 43659 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074697s
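
The NXDOMAIN/NOERROR interleaving above is the pod resolver walking its search path (`default.svc.cluster.local`, `svc.cluster.local`, `cluster.local`, then the node's `us-east-2.compute.internal` suffix) before the exact service FQDN finally answers NOERROR. A trailing dot marks a name as fully qualified and skips that expansion; a minimal Go sketch, assuming it runs inside a pod whose resolver points at this CoreDNS:

	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		// The trailing dot makes the name rooted, so the resolver queries it
		// directly instead of trying each search-path suffix first.
		addrs, err := net.DefaultResolver.LookupHost(ctx, "hello-world-app.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs)
	}

Lowering `ndots` in the pod's dnsConfig would likewise make dotted names be tried as absolute queries first, trimming the NXDOMAIN round trips seen here.
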
	
	
	==> describe nodes <==
	Name:               addons-680529
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-680529
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=addons-680529
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T23_54_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-680529
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:54:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-680529
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:04:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:24 +0000   Wed, 11 Dec 2024 23:54:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:24 +0000   Wed, 11 Dec 2024 23:54:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:24 +0000   Wed, 11 Dec 2024 23:54:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:24 +0000   Wed, 11 Dec 2024 23:55:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-680529
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc891a14bf004a35b657861066d42169
	  System UUID:                0af98c1a-d97e-4b29-afb3-458739a2719a
	  Boot ID:                    841b5c7a-a318-4122-9975-963f80741cc3
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  default                     hello-world-app-55bf9c44b4-jqgpm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 coredns-7c65d6cfc9-ltfkm                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m55s
	  kube-system                 etcd-addons-680529                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-5n8x6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m55s
	  kube-system                 kube-apiserver-addons-680529             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-680529    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-rl6lb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-addons-680529             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-c68dp          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         9m49s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9m48s              kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-680529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-680529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-680529 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m                kubelet          Node addons-680529 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                kubelet          Node addons-680529 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                kubelet          Node addons-680529 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m56s              node-controller  Node addons-680529 event: Registered Node addons-680529 in Controller
	  Normal   NodeReady                9m7s               kubelet          Node addons-680529 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec11 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014241] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.484923] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027949] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.031181] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017950] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.643593] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.899190] kauditd_printk_skb: 36 callbacks suppressed
	[Dec11 23:00] hrtimer: interrupt took 6733940 ns
	[Dec11 23:22] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [df37df1745de7383f027e1f0be0f193bd67d66fbef8865d42ff03f2555701a30] <==
	{"level":"warn","ts":"2024-12-11T23:54:21.653775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"475.016139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:54:21.654614Z","caller":"traceutil/trace.go:171","msg":"trace[1203767566] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:385; }","duration":"475.867538ms","start":"2024-12-11T23:54:21.178727Z","end":"2024-12-11T23:54:21.654594Z","steps":["trace[1203767566] 'agreement among raft nodes before linearized reading'  (duration: 262.674349ms)","trace[1203767566] 'range keys from in-memory index tree'  (duration: 189.384529ms)","trace[1203767566] 'filter and sort the key-value pairs'  (duration: 22.937011ms)"],"step_count":3}
	{"level":"warn","ts":"2024-12-11T23:54:21.654936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:54:21.178686Z","time spent":"476.230092ms","remote":"127.0.0.1:59230","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts\" limit:1 "}
	{"level":"info","ts":"2024-12-11T23:54:21.770483Z","caller":"traceutil/trace.go:171","msg":"trace[1290719041] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"122.008564ms","start":"2024-12-11T23:54:21.648459Z","end":"2024-12-11T23:54:21.770468Z","steps":["trace[1290719041] 'process raft request'  (duration: 121.907577ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:54:23.336671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.513976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2024-12-11T23:54:23.379068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.91419ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033846599658909 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/default/cloud-spanner-emulator\" mod_revision:0 > success:<request_put:<key:\"/registry/deployments/default/cloud-spanner-emulator\" value_size:2570 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-11T23:54:23.385475Z","caller":"traceutil/trace.go:171","msg":"trace[1701703965] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"172.195723ms","start":"2024-12-11T23:54:23.213253Z","end":"2024-12-11T23:54:23.385449Z","steps":["trace[1701703965] 'compare'  (duration: 62.593774ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:23.390910Z","caller":"traceutil/trace.go:171","msg":"trace[17707196] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:392; }","duration":"177.755625ms","start":"2024-12-11T23:54:23.213130Z","end":"2024-12-11T23:54:23.390886Z","steps":["trace[17707196] 'range keys from in-memory index tree'  (duration: 123.355292ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:23.392941Z","caller":"traceutil/trace.go:171","msg":"trace[1818917909] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"139.273538ms","start":"2024-12-11T23:54:23.253656Z","end":"2024-12-11T23:54:23.392929Z","steps":["trace[1818917909] 'process raft request'  (duration: 139.051675ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:23.393100Z","caller":"traceutil/trace.go:171","msg":"trace[810089605] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"178.740937ms","start":"2024-12-11T23:54:23.214352Z","end":"2024-12-11T23:54:23.393092Z","steps":["trace[810089605] 'process raft request'  (duration: 171.0837ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:23.577334Z","caller":"traceutil/trace.go:171","msg":"trace[1751098335] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"111.770037ms","start":"2024-12-11T23:54:23.465552Z","end":"2024-12-11T23:54:23.577322Z","steps":["trace[1751098335] 'process raft request'  (duration: 111.661091ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:24.850250Z","caller":"traceutil/trace.go:171","msg":"trace[1166015492] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"143.11542ms","start":"2024-12-11T23:54:24.707074Z","end":"2024-12-11T23:54:24.850189Z","steps":["trace[1166015492] 'process raft request'  (duration: 128.523134ms)","trace[1166015492] 'compare'  (duration: 14.491094ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:54:24.911167Z","caller":"traceutil/trace.go:171","msg":"trace[1052296647] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"196.419435ms","start":"2024-12-11T23:54:24.714735Z","end":"2024-12-11T23:54:24.911154Z","steps":["trace[1052296647] 'process raft request'  (duration: 196.309036ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:25.055043Z","caller":"traceutil/trace.go:171","msg":"trace[214184549] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"130.769497ms","start":"2024-12-11T23:54:24.924254Z","end":"2024-12-11T23:54:25.055023Z","steps":["trace[214184549] 'process raft request'  (duration: 83.648791ms)","trace[214184549] 'compare'  (duration: 46.89173ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:54:25.056665Z","caller":"traceutil/trace.go:171","msg":"trace[1812331807] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"128.351936ms","start":"2024-12-11T23:54:24.928287Z","end":"2024-12-11T23:54:25.056639Z","steps":["trace[1812331807] 'process raft request'  (duration: 126.660862ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:25.056818Z","caller":"traceutil/trace.go:171","msg":"trace[625613241] linearizableReadLoop","detail":"{readStateIndex:461; appliedIndex:459; }","duration":"109.103373ms","start":"2024-12-11T23:54:24.947684Z","end":"2024-12-11T23:54:25.056788Z","steps":["trace[625613241] 'read index received'  (duration: 60.17445ms)","trace[625613241] 'applied index is now lower than readState.Index'  (duration: 48.926716ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:54:25.057023Z","caller":"traceutil/trace.go:171","msg":"trace[1337573269] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"109.260917ms","start":"2024-12-11T23:54:24.947753Z","end":"2024-12-11T23:54:25.057014Z","steps":["trace[1337573269] 'process raft request'  (duration: 108.181517ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:54:25.057096Z","caller":"traceutil/trace.go:171","msg":"trace[864608425] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"109.276613ms","start":"2024-12-11T23:54:24.947808Z","end":"2024-12-11T23:54:25.057084Z","steps":["trace[864608425] 'process raft request'  (duration: 108.295764ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:54:25.057504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.79375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:54:25.058062Z","caller":"traceutil/trace.go:171","msg":"trace[1044297474] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:454; }","duration":"110.368263ms","start":"2024-12-11T23:54:24.947680Z","end":"2024-12-11T23:54:25.058048Z","steps":["trace[1044297474] 'agreement among raft nodes before linearized reading'  (duration: 109.775091ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:54:25.347266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.395011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-680529\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-12-11T23:54:25.354068Z","caller":"traceutil/trace.go:171","msg":"trace[140567419] range","detail":"{range_begin:/registry/minions/addons-680529; range_end:; response_count:1; response_revision:473; }","duration":"110.200127ms","start":"2024-12-11T23:54:25.243851Z","end":"2024-12-11T23:54:25.354051Z","steps":["trace[140567419] 'agreement among raft nodes before linearized reading'  (duration: 103.233668ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-12T00:04:09.512192Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1853}
	{"level":"info","ts":"2024-12-12T00:04:09.553311Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1853,"took":"40.495607ms","hash":1275112395,"current-db-size-bytes":8384512,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":5181440,"current-db-size-in-use":"5.2 MB"}
	{"level":"info","ts":"2024-12-12T00:04:09.553366Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1275112395,"revision":1853,"compact-revision":-1}
	
	
	==> kernel <==
	 00:04:15 up  1:46,  0 users,  load average: 0.27, 0.97, 2.01
	Linux addons-680529 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d03a9536de2615711e6aafce6d843bca16d7fde5a0440f9e84a55e33b7f3e2b5] <==
	I1212 00:02:06.423306       1 main.go:301] handling current node
	I1212 00:02:16.430790       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:02:16.430821       1 main.go:301] handling current node
	I1212 00:02:26.423839       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:02:26.423871       1 main.go:301] handling current node
	I1212 00:02:36.423503       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:02:36.423534       1 main.go:301] handling current node
	I1212 00:02:46.423050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:02:46.424061       1 main.go:301] handling current node
	I1212 00:02:56.423034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:02:56.423159       1 main.go:301] handling current node
	I1212 00:03:06.430204       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:03:06.430244       1 main.go:301] handling current node
	I1212 00:03:16.423178       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:03:16.423214       1 main.go:301] handling current node
	I1212 00:03:26.423959       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:03:26.424026       1 main.go:301] handling current node
	I1212 00:03:36.430704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:03:36.430737       1 main.go:301] handling current node
	I1212 00:03:46.423033       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:03:46.423063       1 main.go:301] handling current node
	I1212 00:03:56.423511       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:03:56.423556       1 main.go:301] handling current node
	I1212 00:04:06.430617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 00:04:06.430725       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a7c5aafc3840bb42186d9aea30a256badd54fa94e27ee9566d8805b029ea85a9] <==
	E1211 23:56:57.739139       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.204.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.204.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.204.160:443: connect: connection refused" logger="UnhandledError"
	I1211 23:56:57.809537       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1211 23:57:42.296301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49024: use of closed network connection
	E1211 23:57:42.717363       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49058: use of closed network connection
	I1211 23:57:52.072376       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.224.17"}
	I1211 23:58:27.582233       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1211 23:58:42.086534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.086697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:58:42.109200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.109262       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:58:42.133832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.134807       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:58:42.245941       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.246076       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:58:42.276269       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:58:42.276705       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1211 23:58:43.245529       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1211 23:58:43.277146       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1211 23:58:43.320032       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1211 23:58:55.793598       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1211 23:58:56.830906       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1211 23:59:01.356984       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1211 23:59:01.660892       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.191.192"}
	I1212 00:01:22.380411       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.146.50"}
	E1212 00:02:16.827795       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [b109b488cf6dcfed2ad5d0f9c0b38e27f828da9b0aee40471d360272905098d7] <==
	W1212 00:02:20.614038       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:02:20.614203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1212 00:02:24.500827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-680529"
	W1212 00:02:26.757073       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:02:26.757117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:02:28.278905       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:02:28.278970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1212 00:02:49.146411       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I1212 00:02:50.685865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="7.368µs"
	W1212 00:02:52.941458       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:02:52.941500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:02:59.899963       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:02:59.900010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:03:00.203033       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:03:00.203080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:03:06.549968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:03:06.550012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:03:32.665431       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:03:32.665569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:03:36.825239       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:03:36.825280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:03:51.831105       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:03:51.831149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1212 00:03:52.278171       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 00:03:52.278214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f5b9aebd301a382d4a983705cfb7c3bf32fe23eaa2d4a2e7438cad2251148c73] <==
	I1211 23:54:24.089049       1 server_linux.go:66] "Using iptables proxy"
	I1211 23:54:25.397625       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1211 23:54:25.495735       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:54:26.266322       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1211 23:54:26.272807       1 server_linux.go:169] "Using iptables Proxier"
	I1211 23:54:26.466871       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:54:26.470439       1 server.go:483] "Version info" version="v1.31.2"
	I1211 23:54:26.470673       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:54:26.472641       1 config.go:199] "Starting service config controller"
	I1211 23:54:26.472724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1211 23:54:26.472767       1 config.go:105] "Starting endpoint slice config controller"
	I1211 23:54:26.472813       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1211 23:54:26.473239       1 config.go:328] "Starting node config controller"
	I1211 23:54:26.473296       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1211 23:54:26.575402       1 shared_informer.go:320] Caches are synced for node config
	I1211 23:54:26.611677       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1211 23:54:26.625457       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a8aa020a72093940be9978a746548762becd42c86b1a2fbbcec72c604d118bb8] <==
	W1211 23:54:12.000474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1211 23:54:12.002891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:12.000682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1211 23:54:12.003109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:12.002522       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1211 23:54:12.003235       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1211 23:54:12.841854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1211 23:54:12.841964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:12.843119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1211 23:54:12.843211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:12.856909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1211 23:54:12.857037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.107467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1211 23:54:13.107623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.115151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1211 23:54:13.115200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.194793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1211 23:54:13.194840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.212461       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1211 23:54:13.212598       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1211 23:54:13.229088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1211 23:54:13.229232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1211 23:54:13.243534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1211 23:54:13.243578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1211 23:54:16.179736       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 00:02:51 addons-680529 kubelet[1516]: I1212 00:02:51.769248    1516 scope.go:117] "RemoveContainer" containerID="caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc"
	Dec 12 00:02:51 addons-680529 kubelet[1516]: I1212 00:02:51.788700    1516 scope.go:117] "RemoveContainer" containerID="caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc"
	Dec 12 00:02:51 addons-680529 kubelet[1516]: E1212 00:02:51.789106    1516 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc\": container with ID starting with caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc not found: ID does not exist" containerID="caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc"
	Dec 12 00:02:51 addons-680529 kubelet[1516]: I1212 00:02:51.789144    1516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc"} err="failed to get container status \"caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc\": rpc error: code = NotFound desc = could not find container \"caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc\": container with ID starting with caa2545993e4268e2fa19655dc7da259e67a7dd09530c52eecd1a49d4e6f7ecc not found: ID does not exist"
	Dec 12 00:02:52 addons-680529 kubelet[1516]: I1212 00:02:52.537625    1516 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9facd92-c7e6-4d7d-93ca-62c980c74791" path="/var/lib/kubelet/pods/f9facd92-c7e6-4d7d-93ca-62c980c74791/volumes"
	Dec 12 00:02:54 addons-680529 kubelet[1516]: E1212 00:02:54.719738    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961774719504375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:02:54 addons-680529 kubelet[1516]: E1212 00:02:54.719774    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961774719504375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:04 addons-680529 kubelet[1516]: E1212 00:03:04.723125    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961784722861356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:04 addons-680529 kubelet[1516]: E1212 00:03:04.723165    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961784722861356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:14 addons-680529 kubelet[1516]: E1212 00:03:14.725119    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961794724898272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:14 addons-680529 kubelet[1516]: E1212 00:03:14.725154    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961794724898272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:24 addons-680529 kubelet[1516]: E1212 00:03:24.727208    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961804726951359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:24 addons-680529 kubelet[1516]: E1212 00:03:24.727245    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961804726951359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:30 addons-680529 kubelet[1516]: I1212 00:03:30.535947    1516 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 00:03:34 addons-680529 kubelet[1516]: E1212 00:03:34.729765    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961814729558760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:34 addons-680529 kubelet[1516]: E1212 00:03:34.729810    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961814729558760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:44 addons-680529 kubelet[1516]: E1212 00:03:44.732356    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961824732108577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:44 addons-680529 kubelet[1516]: E1212 00:03:44.732390    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961824732108577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:54 addons-680529 kubelet[1516]: E1212 00:03:54.734789    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961834734566474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:03:54 addons-680529 kubelet[1516]: E1212 00:03:54.734828    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961834734566474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:04 addons-680529 kubelet[1516]: E1212 00:04:04.737905    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961844737653284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:04 addons-680529 kubelet[1516]: E1212 00:04:04.737943    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961844737653284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:14 addons-680529 kubelet[1516]: E1212 00:04:14.568461    1516 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2, memory: /docker/1574e2ba69a221fa0bfdd7385787c117f5f9c4f65c607102e868f146d42ac6e2/system.slice/kubelet.service"
	Dec 12 00:04:14 addons-680529 kubelet[1516]: E1212 00:04:14.740482    1516 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961854740157688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:14 addons-680529 kubelet[1516]: E1212 00:04:14.740516    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961854740157688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7490f08ddea6e00aacfd56cb8ac004428cc45925332eba9a484eff6d8c5f51ae] <==
	I1211 23:55:08.091934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1211 23:55:08.104292       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1211 23:55:08.104409       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1211 23:55:08.114587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1211 23:55:08.114852       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-680529_667ac295-5fe0-4dd5-9b7a-57923818fe01!
	I1211 23:55:08.115033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ecd09072-3fb9-47fc-b701-e486ef4c06c6", APIVersion:"v1", ResourceVersion:"931", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-680529_667ac295-5fe0-4dd5-9b7a-57923818fe01 became leader
	I1211 23:55:08.215363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-680529_667ac295-5fe0-4dd5-9b7a-57923818fe01!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-680529 -n addons-680529
helpers_test.go:261: (dbg) Run:  kubectl --context addons-680529 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (367.94s)

Test pass (297/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.21
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 6.63
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.22
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 246.13
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.96
35 TestAddons/parallel/Registry 17.55
37 TestAddons/parallel/InspektorGadget 11.74
40 TestAddons/parallel/CSI 42.2
41 TestAddons/parallel/Headlamp 15.88
42 TestAddons/parallel/CloudSpanner 6.55
43 TestAddons/parallel/LocalPath 51.62
44 TestAddons/parallel/NvidiaDevicePlugin 6.51
45 TestAddons/parallel/Yakd 11.71
47 TestAddons/StoppedEnableDisable 12.18
48 TestCertOptions 35.54
49 TestCertExpiration 244.1
51 TestForceSystemdFlag 40.08
52 TestForceSystemdEnv 44.15
58 TestErrorSpam/setup 33.86
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.11
61 TestErrorSpam/pause 1.81
62 TestErrorSpam/unpause 1.81
63 TestErrorSpam/stop 1.44
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 51.3
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 23.81
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.61
75 TestFunctional/serial/CacheCmd/cache/add_local 1.41
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.3
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 40.97
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.77
86 TestFunctional/serial/LogsFileCmd 1.75
87 TestFunctional/serial/InvalidService 4.52
89 TestFunctional/parallel/ConfigCmd 0.52
90 TestFunctional/parallel/DashboardCmd 14.5
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.25
93 TestFunctional/parallel/StatusCmd 1.1
97 TestFunctional/parallel/ServiceCmdConnect 10.62
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 27.8
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 2.15
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.69
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.95
113 TestFunctional/parallel/License 0.27
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
116 TestFunctional/parallel/Version/short 0.09
117 TestFunctional/parallel/Version/components 1.23
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.49
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
125 TestFunctional/parallel/ImageCommands/ImageBuild 4
126 TestFunctional/parallel/ImageCommands/Setup 0.62
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.62
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/MountCmd/any-port 9.08
144 TestFunctional/parallel/MountCmd/specific-port 2.4
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.7
146 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
148 TestFunctional/parallel/ProfileCmd/profile_list 0.42
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
150 TestFunctional/parallel/ServiceCmd/List 0.62
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
153 TestFunctional/parallel/ServiceCmd/Format 0.51
154 TestFunctional/parallel/ServiceCmd/URL 0.59
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 175.94
162 TestMultiControlPlane/serial/DeployApp 9.18
163 TestMultiControlPlane/serial/PingHostFromPods 1.68
164 TestMultiControlPlane/serial/AddWorkerNode 35.14
165 TestMultiControlPlane/serial/NodeLabels 0.11
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
167 TestMultiControlPlane/serial/CopyFile 18.99
168 TestMultiControlPlane/serial/StopSecondaryNode 12.72
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
170 TestMultiControlPlane/serial/RestartSecondaryNode 33.39
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.29
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 154.57
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.53
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
175 TestMultiControlPlane/serial/StopCluster 35.66
176 TestMultiControlPlane/serial/RestartCluster 100.37
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
178 TestMultiControlPlane/serial/AddSecondaryNode 74.21
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
183 TestJSONOutput/start/Command 49.7
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.76
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.69
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.94
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
208 TestKicCustomNetwork/create_custom_network 38.47
209 TestKicCustomNetwork/use_default_bridge_network 34.41
210 TestKicExistingNetwork 34.21
211 TestKicCustomSubnet 34.46
212 TestKicStaticIP 34.15
213 TestMainNoArgs 0.07
214 TestMinikubeProfile 70.79
217 TestMountStart/serial/StartWithMountFirst 6.47
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 9.39
220 TestMountStart/serial/VerifyMountSecond 0.27
221 TestMountStart/serial/DeleteFirst 1.65
222 TestMountStart/serial/VerifyMountPostDelete 0.28
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.93
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 78.74
229 TestMultiNode/serial/DeployApp2Nodes 7.7
230 TestMultiNode/serial/PingHostFrom2Pods 1.06
231 TestMultiNode/serial/AddNode 28.39
232 TestMultiNode/serial/MultiNodeLabels 0.1
233 TestMultiNode/serial/ProfileList 0.69
234 TestMultiNode/serial/CopyFile 10.11
235 TestMultiNode/serial/StopNode 2.25
236 TestMultiNode/serial/StartAfterStop 10.06
237 TestMultiNode/serial/RestartKeepsNodes 79.98
238 TestMultiNode/serial/DeleteNode 5.32
239 TestMultiNode/serial/StopMultiNode 23.83
240 TestMultiNode/serial/RestartMultiNode 53.31
241 TestMultiNode/serial/ValidateNameConflict 33.2
246 TestPreload 130
248 TestScheduledStopUnix 108.54
251 TestInsufficientStorage 10.64
252 TestRunningBinaryUpgrade 86.19
254 TestKubernetesUpgrade 390.54
255 TestMissingContainerUpgrade 155.06
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
258 TestNoKubernetes/serial/StartWithK8s 40.22
259 TestNoKubernetes/serial/StartWithStopK8s 6.96
260 TestNoKubernetes/serial/Start 10.07
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
262 TestNoKubernetes/serial/ProfileList 1.2
263 TestNoKubernetes/serial/Stop 1.27
264 TestNoKubernetes/serial/StartNoArgs 7.17
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
266 TestStoppedBinaryUpgrade/Setup 1.46
267 TestStoppedBinaryUpgrade/Upgrade 93.49
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
277 TestPause/serial/Start 55.33
278 TestPause/serial/SecondStartNoReconfiguration 35.75
279 TestPause/serial/Pause 1.13
280 TestPause/serial/VerifyStatus 0.4
281 TestPause/serial/Unpause 1.09
282 TestPause/serial/PauseAgain 1.03
283 TestPause/serial/DeletePaused 2.78
284 TestPause/serial/VerifyDeletedResources 0.37
292 TestNetworkPlugins/group/false 5.88
297 TestStartStop/group/old-k8s-version/serial/FirstStart 165.45
299 TestStartStop/group/no-preload/serial/FirstStart 74.24
300 TestStartStop/group/old-k8s-version/serial/DeployApp 11.76
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.58
302 TestStartStop/group/old-k8s-version/serial/Stop 13.74
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
304 TestStartStop/group/old-k8s-version/serial/SecondStart 131.92
305 TestStartStop/group/no-preload/serial/DeployApp 10.45
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.38
307 TestStartStop/group/no-preload/serial/Stop 11.97
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/no-preload/serial/SecondStart 293.58
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
313 TestStartStop/group/old-k8s-version/serial/Pause 3.14
315 TestStartStop/group/embed-certs/serial/FirstStart 51.61
316 TestStartStop/group/embed-certs/serial/DeployApp 10.32
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
318 TestStartStop/group/embed-certs/serial/Stop 11.93
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
320 TestStartStop/group/embed-certs/serial/SecondStart 267.79
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
324 TestStartStop/group/no-preload/serial/Pause 3.03
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.82
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.1
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
335 TestStartStop/group/embed-certs/serial/Pause 3.23
337 TestStartStop/group/newest-cni/serial/FirstStart 37.94
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
340 TestStartStop/group/newest-cni/serial/Stop 1.3
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
342 TestStartStop/group/newest-cni/serial/SecondStart 16.15
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
346 TestStartStop/group/newest-cni/serial/Pause 3.12
347 TestNetworkPlugins/group/auto/Start 53.52
348 TestNetworkPlugins/group/auto/KubeletFlags 0.3
349 TestNetworkPlugins/group/auto/NetCatPod 11.28
350 TestNetworkPlugins/group/auto/DNS 0.2
351 TestNetworkPlugins/group/auto/Localhost 0.15
352 TestNetworkPlugins/group/auto/HairPin 0.15
353 TestNetworkPlugins/group/kindnet/Start 50.56
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
356 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
358 TestNetworkPlugins/group/kindnet/DNS 0.18
359 TestNetworkPlugins/group/kindnet/Localhost 0.16
360 TestNetworkPlugins/group/kindnet/HairPin 0.15
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.06
364 TestNetworkPlugins/group/calico/Start 69.73
365 TestNetworkPlugins/group/custom-flannel/Start 60.5
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.3
370 TestNetworkPlugins/group/calico/NetCatPod 11.28
371 TestNetworkPlugins/group/custom-flannel/DNS 0.27
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
374 TestNetworkPlugins/group/calico/DNS 0.4
375 TestNetworkPlugins/group/calico/Localhost 0.23
376 TestNetworkPlugins/group/calico/HairPin 0.22
377 TestNetworkPlugins/group/enable-default-cni/Start 82.72
378 TestNetworkPlugins/group/flannel/Start 60.52
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
381 TestNetworkPlugins/group/flannel/NetCatPod 9.27
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
384 TestNetworkPlugins/group/flannel/DNS 0.27
385 TestNetworkPlugins/group/flannel/Localhost 0.22
386 TestNetworkPlugins/group/flannel/HairPin 0.25
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
390 TestNetworkPlugins/group/bridge/Start 68.49
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
392 TestNetworkPlugins/group/bridge/NetCatPod 11.54
393 TestNetworkPlugins/group/bridge/DNS 0.17
394 TestNetworkPlugins/group/bridge/Localhost 0.15
395 TestNetworkPlugins/group/bridge/HairPin 0.14
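Note: the durations above are per-test, and many of these tests run in parallel, so their sum is cumulative test time rather than wall-clock time. A minimal sketch for totalling the Duration column, assuming the rows are saved to a plain-text file (results.txt is a hypothetical name):

    # Sum the last whitespace-separated field (Duration) of every row
    awk '{sum += $NF} END {printf "cumulative: %.2f seconds\n", sum}' results.txt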
x
+
TestDownloadOnly/v1.20.0/json-events (9.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-242646 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-242646 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.211519411s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.21s)
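For reference, the same download-only exercise can be reproduced outside CI with a stock minikube binary; a minimal sketch (the profile name is hypothetical):

    # Fetch the kic base image, preload tarball, and binaries without creating a cluster
    minikube start -o=json --download-only -p demo --force \
      --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker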

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1211 23:53:17.090517  272599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1211 23:53:17.090597  272599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
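The check above only confirms that the tarball landed in the local cache. A minimal sketch of the same check from a shell, assuming the default MINIKUBE_HOME (this CI run overrides it, as the paths above show):

    # The v1.20.0 cri-o arm64 preload should be listed here after a download-only run
    ls ~/.minikube/cache/preloaded-tarball/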

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-242646
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-242646: exit status 85 (74.532844ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-242646 | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |          |
	|         | -p download-only-242646        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:53:07
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:53:07.924045  272605 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:53:07.924238  272605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:53:07.924266  272605 out.go:358] Setting ErrFile to fd 2...
	I1211 23:53:07.924291  272605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:53:07.924562  272605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	W1211 23:53:07.924743  272605 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20083-267093/.minikube/config/config.json: open /home/jenkins/minikube-integration/20083-267093/.minikube/config/config.json: no such file or directory
	I1211 23:53:07.925170  272605 out.go:352] Setting JSON to true
	I1211 23:53:07.926058  272605 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5729,"bootTime":1733955459,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1211 23:53:07.926190  272605 start.go:139] virtualization:  
	I1211 23:53:07.928926  272605 out.go:97] [download-only-242646] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1211 23:53:07.929097  272605 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball: no such file or directory
	I1211 23:53:07.929144  272605 notify.go:220] Checking for updates...
	I1211 23:53:07.931029  272605 out.go:169] MINIKUBE_LOCATION=20083
	I1211 23:53:07.933536  272605 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:53:07.935281  272605 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1211 23:53:07.937045  272605 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	I1211 23:53:07.939387  272605 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1211 23:53:07.943781  272605 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:53:07.944127  272605 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:53:07.971868  272605 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1211 23:53:07.971968  272605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:53:08.032387  272605 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-11 23:53:08.023057765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1211 23:53:08.032502  272605 docker.go:318] overlay module found
	I1211 23:53:08.035741  272605 out.go:97] Using the docker driver based on user configuration
	I1211 23:53:08.035780  272605 start.go:297] selected driver: docker
	I1211 23:53:08.035788  272605 start.go:901] validating driver "docker" against <nil>
	I1211 23:53:08.035892  272605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:53:08.087520  272605 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-11 23:53:08.078557757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1211 23:53:08.087742  272605 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:53:08.088054  272605 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1211 23:53:08.088245  272605 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:53:08.089945  272605 out.go:169] Using Docker driver with root privileges
	I1211 23:53:08.091138  272605 cni.go:84] Creating CNI manager for ""
	I1211 23:53:08.091209  272605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:53:08.091220  272605 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:53:08.091297  272605 start.go:340] cluster config:
	{Name:download-only-242646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-242646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:53:08.093219  272605 out.go:97] Starting "download-only-242646" primary control-plane node in "download-only-242646" cluster
	I1211 23:53:08.093252  272605 cache.go:121] Beginning downloading kic base image for docker with crio
	I1211 23:53:08.095073  272605 out.go:97] Pulling base image v0.0.45-1733912881-20083 ...
	I1211 23:53:08.095106  272605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1211 23:53:08.095273  272605 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1211 23:53:08.111173  272605 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1211 23:53:08.111361  272605 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1211 23:53:08.111459  272605 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1211 23:53:08.172861  272605 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1211 23:53:08.172887  272605 cache.go:56] Caching tarball of preloaded images
	I1211 23:53:08.173035  272605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1211 23:53:08.175157  272605 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1211 23:53:08.175191  272605 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1211 23:53:08.260140  272605 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-242646 host does not exist
	  To start a cluster, run: "minikube start -p download-only-242646"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
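The non-zero exit is the expected outcome here: the profile was only downloaded and never started, so there is no host to collect logs from, and the test passes on exit status 85. A minimal sketch:

    minikube logs -p download-only-242646
    echo "exit status: $?"   # 85 in this run: the control-plane host was never created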

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-242646
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (6.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-228158 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-228158 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.628336522s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (6.63s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1211 23:53:24.137875  272599 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1211 23:53:24.137915  272599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-228158
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-228158: exit status 85 (71.656693ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-242646 | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | -p download-only-242646        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| delete  | -p download-only-242646        | download-only-242646 | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC | 11 Dec 24 23:53 UTC |
	| start   | -o=json --download-only        | download-only-228158 | jenkins | v1.34.0 | 11 Dec 24 23:53 UTC |                     |
	|         | -p download-only-228158        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:53:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:53:17.559832  272808 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:53:17.559966  272808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:53:17.559977  272808 out.go:358] Setting ErrFile to fd 2...
	I1211 23:53:17.559983  272808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:53:17.560231  272808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1211 23:53:17.560635  272808 out.go:352] Setting JSON to true
	I1211 23:53:17.561539  272808 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5739,"bootTime":1733955459,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1211 23:53:17.561613  272808 start.go:139] virtualization:  
	I1211 23:53:17.563847  272808 out.go:97] [download-only-228158] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1211 23:53:17.564067  272808 notify.go:220] Checking for updates...
	I1211 23:53:17.565316  272808 out.go:169] MINIKUBE_LOCATION=20083
	I1211 23:53:17.566917  272808 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:53:17.568300  272808 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1211 23:53:17.569614  272808 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	I1211 23:53:17.570770  272808 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1211 23:53:17.573430  272808 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:53:17.573686  272808 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:53:17.596152  272808 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1211 23:53:17.596267  272808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:53:17.659995  272808 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-11 23:53:17.65080254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1211 23:53:17.660105  272808 docker.go:318] overlay module found
	I1211 23:53:17.661445  272808 out.go:97] Using the docker driver based on user configuration
	I1211 23:53:17.661487  272808 start.go:297] selected driver: docker
	I1211 23:53:17.661497  272808 start.go:901] validating driver "docker" against <nil>
	I1211 23:53:17.661602  272808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:53:17.711695  272808 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-11 23:53:17.703390542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1211 23:53:17.711984  272808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:53:17.712244  272808 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1211 23:53:17.712407  272808 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:53:17.713941  272808 out.go:169] Using Docker driver with root privileges
	I1211 23:53:17.715134  272808 cni.go:84] Creating CNI manager for ""
	I1211 23:53:17.715202  272808 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 23:53:17.715220  272808 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:53:17.715306  272808 start.go:340] cluster config:
	{Name:download-only-228158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-228158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:53:17.716628  272808 out.go:97] Starting "download-only-228158" primary control-plane node in "download-only-228158" cluster
	I1211 23:53:17.716657  272808 cache.go:121] Beginning downloading kic base image for docker with crio
	I1211 23:53:17.717779  272808 out.go:97] Pulling base image v0.0.45-1733912881-20083 ...
	I1211 23:53:17.717810  272808 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:53:17.717991  272808 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1211 23:53:17.733681  272808 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1211 23:53:17.733806  272808 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1211 23:53:17.733830  272808 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory, skipping pull
	I1211 23:53:17.733840  272808 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in cache, skipping pull
	I1211 23:53:17.733849  272808 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 as a tarball
	I1211 23:53:17.771622  272808 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1211 23:53:17.771646  272808 cache.go:56] Caching tarball of preloaded images
	I1211 23:53:17.771807  272808 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:53:17.773340  272808 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1211 23:53:17.773371  272808 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 ...
	I1211 23:53:17.865830  272808 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:810fe254d498dda367f4e14b5cba638f -> /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1211 23:53:22.522275  272808 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 ...
	I1211 23:53:22.522413  272808 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20083-267093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-228158 host does not exist
	  To start a cluster, run: "minikube start -p download-only-228158"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-228158
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1211 23:53:25.406972  272599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-562366 --alsologtostderr --binary-mirror http://127.0.0.1:39867 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-562366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-562366
--- PASS: TestBinaryMirror (0.59s)
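The test points the downloader at a local HTTP server instead of dl.k8s.io. A minimal sketch, assuming a local directory ./mirror (hypothetical) that mimics the dl.k8s.io release layout:

    # Serve the mirror, then route binary downloads through it
    python3 -m http.server 39867 --directory ./mirror &
    minikube start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:39867 \
      --driver=docker --container-runtime=crio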

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-680529
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-680529: exit status 85 (74.302958ms)

                                                
                                                
-- stdout --
	* Profile "addons-680529" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-680529"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
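Exit status 85 with the "Profile ... not found" hint is the expected response when an addon command targets a profile that does not exist; a minimal sketch (profile name hypothetical):

    minikube addons enable dashboard -p no-such-profile
    echo "exit status: $?"   # expect 85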

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-680529
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-680529: exit status 85 (86.792106ms)

                                                
                                                
-- stdout --
	* Profile "addons-680529" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-680529"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (246.13s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-680529 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-680529 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m6.125156658s)
--- PASS: TestAddons/Setup (246.13s)
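The setup run enables every addon under test in a single start. Outside CI a smaller subset is usually enough; a minimal sketch (profile name hypothetical):

    minikube start -p demo --memory=4000 --driver=docker --container-runtime=crio \
      --addons=ingress --addons=ingress-dns --addons=metrics-server --addons=registry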

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-680529 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-680529 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)
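The test asserts that the gcp-auth addon copies its credentials secret into namespaces created after setup; a minimal sketch against a running cluster (context name hypothetical):

    kubectl --context demo create ns new-namespace
    kubectl --context demo get secret gcp-auth -n new-namespace   # present if replication worked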

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.96s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-680529 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-680529 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aa565b90-46f9-4d55-afbd-87d5f1c18fc3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aa565b90-46f9-4d55-afbd-87d5f1c18fc3] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003953962s
addons_test.go:633: (dbg) Run:  kubectl --context addons-680529 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-680529 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-680529 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-680529 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.96s)
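The same checks can be run by hand: gcp-auth mounts fake credentials into the pod and exports GOOGLE_APPLICATION_CREDENTIALS pointing at them. A minimal sketch (context name hypothetical):

    kubectl --context demo exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context demo exec busybox -- /bin/sh -c "cat /google-app-creds.json"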

                                                
                                    
x
+
TestAddons/parallel/Registry (17.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.988574ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-xnkxj" [13f2d3d8-1d08-41f1-80e2-d19e09a1c46d] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.035223201s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-f2dfg" [79eadeb8-583a-4e72-87f2-bd4c865a9319] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004614138s
addons_test.go:331: (dbg) Run:  kubectl --context addons-680529 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-680529 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-680529 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.558207771s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 ip
2024/12/11 23:58:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.55s)
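The registry probe above is a plain HTTP reachability check run from inside the cluster; a minimal sketch (context name hypothetical):

    kubectl --context demo run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"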

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-99msf" [cd7a5667-d368-4822-95c7-30be1bfe0fb2] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00427439s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 addons disable inspektor-gadget --alsologtostderr -v=1: (5.732119809s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                    
x
+
TestAddons/parallel/CSI (42.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1211 23:58:07.093743  272599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1211 23:58:07.103780  272599 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1211 23:58:07.103810  272599 kapi.go:107] duration metric: took 10.08419ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.095431ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-680529 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-680529 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0b1bdf8f-2ab4-4cb7-ac59-d6a114ca8a38] Pending
helpers_test.go:344: "task-pv-pod" [0b1bdf8f-2ab4-4cb7-ac59-d6a114ca8a38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0b1bdf8f-2ab4-4cb7-ac59-d6a114ca8a38] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003472905s
addons_test.go:511: (dbg) Run:  kubectl --context addons-680529 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-680529 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-680529 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-680529 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-680529 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-680529 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-680529 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d9ae29ee-4cc9-42f2-89c8-09ec09cb4f78] Pending
helpers_test.go:344: "task-pv-pod-restore" [d9ae29ee-4cc9-42f2-89c8-09ec09cb4f78] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d9ae29ee-4cc9-42f2-89c8-09ec09cb4f78] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003708079s
addons_test.go:553: (dbg) Run:  kubectl --context addons-680529 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-680529 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-680529 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.76427878s)
--- PASS: TestAddons/parallel/CSI (42.20s)
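Note: the snapshot/restore flow exercised above, condensed to its kubectl steps. Manifest paths are the test's own testdata files; object names match the log:

    minikube addons enable volumesnapshots
    minikube addons enable csi-hostpath-driver
    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml             # PVC "hpvc"
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod "task-pv-pod" writes to the volume
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot "new-snapshot-demo"
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # PVC "hpvc-restore", sourced from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod "task-pv-pod-restore" reads the data back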

TestAddons/parallel/Headlamp (15.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-680529 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-tpm8d" [2ee95868-03d5-4bfd-a280-de2a61c1a1d5] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-tpm8d" [2ee95868-03d5-4bfd-a280-de2a61c1a1d5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-tpm8d" [2ee95868-03d5-4bfd-a280-de2a61c1a1d5] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003767968s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 addons disable headlamp --alsologtostderr -v=1: (5.942263638s)
--- PASS: TestAddons/parallel/Headlamp (15.88s)
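Note: manual equivalent (the label selector comes from the log; opening the UI through the addon's Service is an assumption not exercised by this test):

    minikube addons enable headlamp
    kubectl get pods -n headlamp -l app.kubernetes.io/name=headlamp
    minikube service headlamp -n headlamp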

TestAddons/parallel/CloudSpanner (6.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-s2gl8" [f9facd92-c7e6-4d7d-93ca-62c980c74791] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003114164s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

TestAddons/parallel/LocalPath (51.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-680529 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-680529 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680529 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [81890203-00cf-4284-9679-6024e0b6f3d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [81890203-00cf-4284-9679-6024e0b6f3d6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [81890203-00cf-4284-9679-6024e0b6f3d6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003975268s
addons_test.go:906: (dbg) Run:  kubectl --context addons-680529 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 ssh "cat /opt/local-path-provisioner/pvc-179f3b88-d822-4d7e-95c6-fe03050f1eae_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-680529 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-680529 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.305963908s)
--- PASS: TestAddons/parallel/LocalPath (51.62s)
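Note: a sketch of the same flow by hand. The testdata manifests are the test's own; the wildcard in the cat path is an assumption standing in for the generated PVC UID seen in the log:

    minikube addons enable storage-provisioner-rancher
    kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml   # PVC "test-pvc"
    kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml   # pod "test-local-path" writes file1
    minikube ssh "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"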

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pcmmw" [165e1834-cab1-404d-bc96-38a766c51940] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008163549s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (11.71s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-sknr2" [06120a3f-a0ca-46fd-98e1-7d8163b3cab4] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003639821s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-680529 addons disable yakd --alsologtostderr -v=1: (5.705431228s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-680529
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-680529: (11.890013295s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-680529
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-680529
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-680529
--- PASS: TestAddons/StoppedEnableDisable (12.18s)
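Note: the point of this test is that addon toggling still works against a stopped cluster. Manual equivalent, using the profile name from the log:

    minikube stop -p addons-680529
    minikube addons enable dashboard -p addons-680529     # succeeds while the cluster is stopped
    minikube addons disable dashboard -p addons-680529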

TestCertOptions (35.54s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-064427 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-064427 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.890485931s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-064427 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-064427 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-064427 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-064427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-064427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-064427: (1.999915179s)
--- PASS: TestCertOptions (35.54s)
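Note: condensed reproduction of the certificate-SAN check above; the grep is an added convenience, not part of the test:

    minikube start -p cert-options --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 \
      --driver=docker --container-runtime=crio
    minikube -p cert-options ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"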

TestCertExpiration (244.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-447856 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-447856 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (43.136422631s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-447856 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-447856 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.499924096s)
helpers_test.go:175: Cleaning up "cert-expiration-447856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-447856
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-447856: (2.463214638s)
--- PASS: TestCertExpiration (244.10s)
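Note: what this test does, reduced to two commands. The three-minute expiry forces a certificate rotation on the second start (8760h = one year):

    minikube start -p cert-expiration --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
    # ...after the certificates lapse, restart with a longer window to rotate them:
    minikube start -p cert-expiration --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio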

TestForceSystemdFlag (40.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-465270 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-465270 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.367323883s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-465270 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-465270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-465270
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-465270: (2.343002143s)
--- PASS: TestForceSystemdFlag (40.08s)
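Note: manual equivalent; the test inspects the CRI-O drop-in to confirm the systemd cgroup manager was selected (the grep target is an assumption about that file's contents):

    minikube start -p force-systemd-flag --force-systemd --driver=docker --container-runtime=crio
    minikube -p force-systemd-flag ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager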

TestForceSystemdEnv (44.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-637532 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-637532 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.422857128s)
helpers_test.go:175: Cleaning up "force-systemd-env-637532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-637532
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-637532: (2.726294993s)
--- PASS: TestForceSystemdEnv (44.15s)
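Note: the same behaviour driven by the environment rather than the flag; the MINIKUBE_FORCE_SYSTEMD variable appears in the start output captured elsewhere in this report:

    MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env --driver=docker --container-runtime=crio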

TestErrorSpam/setup (33.86s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-797223 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-797223 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-797223 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-797223 --driver=docker  --container-runtime=crio: (33.857466876s)
--- PASS: TestErrorSpam/setup (33.86s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 stop: (1.245246866s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-797223 --log_dir /tmp/nospam-797223 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20083-267093/.minikube/files/etc/test/nested/copy/272599/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-931406 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-931406 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (51.300154994s)
--- PASS: TestFunctional/serial/StartWithProxy (51.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (23.81s)

=== RUN   TestFunctional/serial/SoftStart
I1212 00:06:14.153244  272599 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-931406 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-931406 --alsologtostderr -v=8: (23.810399269s)
functional_test.go:663: soft start took 23.810939018s for "functional-931406" cluster.
I1212 00:06:37.963940  272599 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (23.81s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-931406 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 cache add registry.k8s.io/pause:3.1: (1.549738991s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 cache add registry.k8s.io/pause:3.3: (1.555212446s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 cache add registry.k8s.io/pause:latest: (1.50688858s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.61s)
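Note: the cache subcommands exercised in this block, runnable against any profile:

    minikube cache add registry.k8s.io/pause:3.1
    minikube cache add registry.k8s.io/pause:3.3
    minikube cache list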

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-931406 /tmp/TestFunctionalserialCacheCmdcacheadd_local4177164038/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cache add minikube-local-cache-test:functional-931406
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cache delete minikube-local-cache-test:functional-931406
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-931406
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (394.642884ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 cache reload: (1.263375581s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.30s)
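Note: the reload round-trip by hand; crictl runs inside the node, hence the ssh wrapper:

    minikube ssh sudo crictl rmi registry.k8s.io/pause:latest       # drop the image from the runtime
    minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest  # now fails: no such image
    minikube cache reload                                           # pushes cached images back into the node
    minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again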

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 kubectl -- --context functional-931406 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-931406 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (40.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-931406 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-931406 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.970881805s)
functional_test.go:761: restart took 40.971025612s for "functional-931406" cluster.
I1212 00:07:28.236350  272599 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (40.97s)
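Note: --extra-config takes component.key=value pairs; the admission-plugin setting is the one used above and survives the restart:

    minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all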

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-931406 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 logs: (1.771634268s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

TestFunctional/serial/LogsFileCmd (1.75s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 logs --file /tmp/TestFunctionalserialLogsFileCmd2798463444/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 logs --file /tmp/TestFunctionalserialLogsFileCmd2798463444/001/logs.txt: (1.749211099s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.75s)
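Note: the two log-retrieval modes tested above:

    minikube logs                       # print to stdout
    minikube logs --file /tmp/logs.txt  # write to a file instead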

TestFunctional/serial/InvalidService (4.52s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-931406 apply -f testdata/invalidsvc.yaml
E1212 00:07:33.045569  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:33.052060  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:33.063616  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:33.085121  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:33.126617  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:33.208111  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:33.369642  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:33.691422  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:34.333425  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-931406
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-931406: exit status 115 (385.431632ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32242 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-931406 delete -f testdata/invalidsvc.yaml
E1212 00:07:35.615452  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/InvalidService (4.52s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 config get cpus: exit status 14 (70.626047ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 config get cpus: exit status 14 (96.986835ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
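Note: the get/set/unset cycle, where exit status 14 marks a missing key:

    minikube config set cpus 2
    minikube config get cpus    # prints 2
    minikube config unset cpus
    minikube config get cpus    # fails: key not found
    echo $?                     # 14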

TestFunctional/parallel/DashboardCmd (14.5s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-931406 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-931406 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 302823: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.50s)
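Note: the command under test; --url prints the proxied address instead of opening a browser (the port number is the test's choice):

    minikube dashboard --url --port 36195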

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-931406 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-931406 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (177.553989ms)

-- stdout --
	* [functional-931406] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1212 00:08:19.161547  301973 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:08:19.161689  301973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:08:19.161700  301973 out.go:358] Setting ErrFile to fd 2...
	I1212 00:08:19.161706  301973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:08:19.161941  301973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1212 00:08:19.162342  301973 out.go:352] Setting JSON to false
	I1212 00:08:19.163245  301973 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6641,"bootTime":1733955459,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 00:08:19.163319  301973 start.go:139] virtualization:  
	I1212 00:08:19.165155  301973 out.go:177] * [functional-931406] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1212 00:08:19.167014  301973 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:08:19.167064  301973 notify.go:220] Checking for updates...
	I1212 00:08:19.169676  301973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:08:19.170945  301973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1212 00:08:19.172266  301973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	I1212 00:08:19.173440  301973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:08:19.174708  301973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:08:19.176665  301973 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:08:19.177204  301973 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:08:19.204184  301973 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1212 00:08:19.204309  301973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:08:19.269653  301973 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-12 00:08:19.259761003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1212 00:08:19.269772  301973 docker.go:318] overlay module found
	I1212 00:08:19.273835  301973 out.go:177] * Using the docker driver based on existing profile
	I1212 00:08:19.275977  301973 start.go:297] selected driver: docker
	I1212 00:08:19.276003  301973 start.go:901] validating driver "docker" against &{Name:functional-931406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-931406 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:08:19.276118  301973 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:08:19.278398  301973 out.go:201] 
	W1212 00:08:19.280331  301973 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:08:19.281841  301973 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-931406 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
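Note: --dry-run validates flags and configuration without touching the cluster; an undersized --memory is rejected with exit status 23, as captured above:

    minikube start --dry-run --memory 250MB --driver=docker --container-runtime=crio
    echo $?    # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)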

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-931406 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-931406 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (247.058284ms)

-- stdout --
	* [functional-931406] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1212 00:08:20.689992  302369 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:08:20.690467  302369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:08:20.691252  302369 out.go:358] Setting ErrFile to fd 2...
	I1212 00:08:20.691299  302369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:08:20.691731  302369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1212 00:08:20.692194  302369 out.go:352] Setting JSON to false
	I1212 00:08:20.693296  302369 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6642,"bootTime":1733955459,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 00:08:20.693402  302369 start.go:139] virtualization:  
	I1212 00:08:20.695548  302369 out.go:177] * [functional-931406] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1212 00:08:20.697001  302369 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:08:20.697082  302369 notify.go:220] Checking for updates...
	I1212 00:08:20.699068  302369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:08:20.700426  302369 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1212 00:08:20.701840  302369 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	I1212 00:08:20.703452  302369 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:08:20.705175  302369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:08:20.707175  302369 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:08:20.707718  302369 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:08:20.758613  302369 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1212 00:08:20.758845  302369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:08:20.843803  302369 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-12 00:08:20.832067976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1212 00:08:20.843964  302369 docker.go:318] overlay module found
	I1212 00:08:20.845823  302369 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:08:20.847356  302369 start.go:297] selected driver: docker
	I1212 00:08:20.847397  302369 start.go:901] validating driver "docker" against &{Name:functional-931406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-931406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:08:20.847527  302369 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:08:20.849608  302369 out.go:201] 
	W1212 00:08:20.851040  302369 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:08:20.852424  302369 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
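
Note: InternationalLanguage repeats the dry-run memory failure under a French locale; the localized stderr above reads, in English, "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB". A sketch of provoking the localized message, assuming minikube selects its message catalog from the locale environment (LC_ALL here); this is a sketch, not necessarily the harness's exact mechanism:

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "functional-931406", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=docker", "--container-runtime=crio")
	// Force a French locale; the RSRC_INSUFFICIENT_REQ_MEMORY message should
	// come back localized, as in the stderr block above.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run() // exit status 23 is expected here
}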

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
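
Note: the status checks above exercise three output modes (default, Go template, JSON). A sketch of consuming the JSON form; the struct declares only the fields the test's template references (Host, Kubelet, APIServer, Kubeconfig), and the exact JSON shape is an assumption based on that template:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-931406", "status", "-o", "json").Output()
	if err != nil {
		// minikube status uses non-zero exits to encode degraded states,
		// so output may still be present alongside an error.
		fmt.Println("status exited non-zero:", err)
	}
	var st status
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		fmt.Println("could not parse status output:", jsonErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}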

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-931406 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-931406 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-dlhn5" [42655c4c-5c96-40e3-88bf-62857ab7397f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-dlhn5" [42655c4c-5c96-40e3-88bf-62857ab7397f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003522097s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30752
functional_test.go:1675: http://192.168.49.2:30752: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-dlhn5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30752
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)
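
Note: the connectivity check reduces to fetching the URL printed by "minikube service hello-node-connect --url" and looking for the echoserver body. A sketch, with this run's URL hard-coded as a placeholder:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// URL from this run's log; normally taken from the service --url output.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.49.2:30752")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if strings.Contains(string(body), "Hostname:") {
		fmt.Println("echoserver reachable via the NodePort")
	} else {
		fmt.Println("unexpected body:", string(body))
	}
}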

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cf0c396c-adc1-48e1-8055-93ae5937253e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004136369s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-931406 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-931406 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-931406 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-931406 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8ee37537-f00e-432a-a79a-8e7fae1ad8e6] Pending
helpers_test.go:344: "sp-pod" [8ee37537-f00e-432a-a79a-8e7fae1ad8e6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8ee37537-f00e-432a-a79a-8e7fae1ad8e6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004006037s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-931406 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-931406 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-931406 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [18aa4b29-b882-47eb-86c5-e39493cb3c34] Pending
helpers_test.go:344: "sp-pod" [18aa4b29-b882-47eb-86c5-e39493cb3c34] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004464795s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-931406 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.80s)
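
Note: the PVC test's core assertion is that data written to the claim survives pod deletion: touch a marker file, recreate the pod from the same manifest, and list the mount again. A trimmed sketch using kubectl, with the context, pod name, and manifest path taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the test cluster's context and echoes
// its combined output; error handling is trimmed for brevity.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-931406"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("kubectl %v -> err=%v\n%s", args, err, out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// After waiting for the recreated pod to be Running (omitted here),
	// the marker file should still exist because it lives on the PVC:
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}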

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh -n functional-931406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cp functional-931406:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd590692101/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh -n functional-931406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh -n functional-931406 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.15s)
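
Note: CpCmd is effectively a round-trip integrity check. A sketch that copies the file into the node, copies it back out, and compares bytes; the source and node paths mirror the log, while the scratch path is invented:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func cp(src, dst string) error {
	return exec.Command("out/minikube-linux-arm64", "-p", "functional-931406",
		"cp", src, dst).Run()
}

func main() {
	orig, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Into the node, then back out to a local scratch path.
	if err := cp("testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	if err := cp("functional-931406:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt"); err != nil {
		panic(err)
	}
	back, err := os.ReadFile("/tmp/cp-test-roundtrip.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip intact:", bytes.Equal(orig, back))
}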

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/272599/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo cat /etc/test/nested/copy/272599/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/272599.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo cat /etc/ssl/certs/272599.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/272599.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo cat /usr/share/ca-certificates/272599.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2725992.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo cat /etc/ssl/certs/2725992.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2725992.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo cat /usr/share/ca-certificates/2725992.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
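
Note: CertSync asserts each synced certificate is readable both at its named path and at its OpenSSL hash-named alias (51391683.0 and 3ec20f2e.0 in this run). A sketch of the existence probe over ssh; the hash names are taken from the log rather than recomputed, and the second certificate's paths are analogous:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/272599.pem",
		"/usr/share/ca-certificates/272599.pem",
		"/etc/ssl/certs/51391683.0", // hash-named alias for the same cert
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-931406",
			"ssh", "sudo cat "+p).Run()
		fmt.Printf("%s readable: %v\n", p, err == nil)
	}
}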

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-931406 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 ssh "sudo systemctl is-active docker": exit status 1 (423.659ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 ssh "sudo systemctl is-active containerd": exit status 1 (529.773582ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.95s)
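
Note: the assertion above leans on systemctl semantics: "is-active" prints the unit state and exits non-zero (status 3) for an inactive unit, which minikube ssh surfaces as exit status 1. A sketch of the same check over both inactive runtimes:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled reports whether the given systemd unit is inactive in the
// node. A non-nil error is expected for an inactive unit: is-active exits 3,
// which minikube ssh propagates as a non-zero exit.
func runtimeDisabled(unit string) bool {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-931406",
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s disabled: %v\n", unit, runtimeDisabled(unit))
	}
}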

                                                
                                    
x
+
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-931406 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-931406 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-931406 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-931406 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 297668: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 version -o=json --components: (1.233820777s)
--- PASS: TestFunctional/parallel/Version/components (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-931406 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-931406 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ea0416ef-89bf-44f0-b512-cfff4b8d1a36] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1212 00:07:38.176946  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [ea0416ef-89bf-44f0-b512-cfff4b8d1a36] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003593161s
I1212 00:07:47.839080  272599 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)
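
Note: the Setup step is an apply followed by a readiness wait on the run=nginx-svc pod. "kubectl wait" expresses the same gate the harness implements with its own pod watcher; the context and manifest are from the log, and the timeout mirrors the 4m bound:

package main

import (
	"os"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "functional-931406"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := kubectl("apply", "-f", "testdata/testsvc.yaml"); err != nil {
		os.Exit(1)
	}
	// Equivalent readiness gate to the 4m0s pod watch in the log.
	if err := kubectl("wait", "--for=condition=ready", "pod",
		"--selector=run=nginx-svc", "--timeout=240s"); err != nil {
		os.Exit(1)
	}
}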

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-931406 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-931406
localhost/kicbase/echo-server:functional-931406
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-931406 image ls --format short --alsologtostderr:
I1212 00:08:24.179495  302937 out.go:345] Setting OutFile to fd 1 ...
I1212 00:08:24.179684  302937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:24.179712  302937 out.go:358] Setting ErrFile to fd 2...
I1212 00:08:24.179736  302937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:24.180095  302937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
I1212 00:08:24.180795  302937 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:24.180965  302937 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:24.181493  302937 cli_runner.go:164] Run: docker container inspect functional-931406 --format={{.State.Status}}
I1212 00:08:24.202482  302937 ssh_runner.go:195] Run: systemctl --version
I1212 00:08:24.202536  302937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-931406
I1212 00:08:24.221405  302937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/functional-931406/id_rsa Username:docker}
I1212 00:08:24.314589  302937 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-931406 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 0bcd66b03df5f | 98.3MB |
| docker.io/library/nginx                 | alpine             | dba92e6b64886 | 58.3MB |
| localhost/minikube-local-cache-test     | functional-931406  | a9f4d570318f0 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 9404aea098d9e | 87MB   |
| registry.k8s.io/kube-proxy              | v1.31.2            | 021d242013305 | 96MB   |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| localhost/kicbase/echo-server           | functional-931406  | ce2d2cda2d858 | 4.79MB |
| localhost/my-image                      | functional-931406  | 7d13d4ce04c4c | 1.64MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | f9c26480f1e72 | 92.6MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | d6b061e73ae45 | 67MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | latest             | bdf62fd3a32f1 | 201MB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-931406 image ls --format table --alsologtostderr:
I1212 00:08:28.972936  303377 out.go:345] Setting OutFile to fd 1 ...
I1212 00:08:28.973122  303377 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:28.973141  303377 out.go:358] Setting ErrFile to fd 2...
I1212 00:08:28.973159  303377 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:28.973455  303377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
I1212 00:08:28.974098  303377 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:28.974275  303377 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:28.974752  303377 cli_runner.go:164] Run: docker container inspect functional-931406 --format={{.State.Status}}
I1212 00:08:28.994365  303377 ssh_runner.go:195] Run: systemctl --version
I1212 00:08:28.994415  303377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-931406
I1212 00:08:29.024932  303377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/functional-931406/id_rsa Username:docker}
I1212 00:08:29.139941  303377 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-931406 image ls --format json --alsologtostderr:
[{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":["docker.io/library/nginx@sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"201166247"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikub
e/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"2be0bcf609c6
573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-931406"],"size":"4788229"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752","registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"86996294"}
,{"id":"021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe","registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"95952789"},{"id":"d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67007814"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"f9c26480f1e722a7d05d7
f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63","registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"92632544"},{"id":"0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"98291250"},{"id":"dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:eff2df9ac0ef6c949886d040dc2037ee6576d76161249261982fb70458ae8c26"],"repoTags":["docker.io/l
ibrary/nginx:alpine"],"size":"58293755"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"7d13d4ce04c4c0172d14350ec36de891f9b1588d4d0b7e5d6d39004619d9d31a","repoDigests":["localhost/my-image@sha256:859cf6eef8286bf53f480e8072a8085ab7684753b0190302a223806b5cd6f0e3"],"repoTags":["localhost/my-image:functional-931
406"],"size":"1640226"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"6a7b3fd13d6c713f7d200fb8f14267ac43c1283e6aac10a64ac0442fa5e7fef4","repoDigests":["docker.io/library/b8045e763deb463cd82cf3f842c192de5ecefb302a11340563f47f0e98b931f2-tmp@sha256:e89dd5deaac5683fa03dd31c0e093e8df220b4a95308249d6badb4337800ff85"],"repoTags":[],"size":"1637644"},{"id":"a9f4d570318f0b40920106086247ad6e89c098660ec69251862ccbda4528d3ec","repoDigests":["localhost/minikube-local-cache-test@sha256:6e2143f056f289e5cdc01749b39a5dd131ca62084fd823b91b69758f57db9e40"],"repoTags":["localhost/minikube-local-cache-test:functional-931406"],"size":"3330"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":
["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-931406 image ls --format json --alsologtostderr:
I1212 00:08:28.683231  303345 out.go:345] Setting OutFile to fd 1 ...
I1212 00:08:28.683438  303345 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:28.683462  303345 out.go:358] Setting ErrFile to fd 2...
I1212 00:08:28.683494  303345 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:28.683938  303345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
I1212 00:08:28.684873  303345 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:28.685039  303345 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:28.686008  303345 cli_runner.go:164] Run: docker container inspect functional-931406 --format={{.State.Status}}
I1212 00:08:28.713766  303345 ssh_runner.go:195] Run: systemctl --version
I1212 00:08:28.713825  303345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-931406
I1212 00:08:28.746513  303345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/functional-931406/id_rsa Username:docker}
I1212 00:08:28.838943  303345 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
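
Note: the image list commands shell into the node and read "sudo crictl images --output json" (visible in the stderr above); the JSON printed here is minikube's reformatted view with id, repoDigests, repoTags, and size fields. A sketch of decoding that shape, using one entry from this run as sample input:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// image declares only the fields visible in this run's output; any other
// fields in the JSON are ignored by encoding/json.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	raw := []byte(`[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a",
		"repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],
		"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]`)
	var imgs []image
	if err := json.Unmarshal(raw, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Printf("%s -> %s bytes\n", strings.Join(im.RepoTags, ","), im.Size)
	}
}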

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-931406 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "98291250"
- id: a9f4d570318f0b40920106086247ad6e89c098660ec69251862ccbda4528d3ec
repoDigests:
- localhost/minikube-local-cache-test@sha256:6e2143f056f289e5cdc01749b39a5dd131ca62084fd823b91b69758f57db9e40
repoTags:
- localhost/minikube-local-cache-test:functional-931406
size: "3330"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:eff2df9ac0ef6c949886d040dc2037ee6576d76161249261982fb70458ae8c26
repoTags:
- docker.io/library/nginx:alpine
size: "58293755"
- id: bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests:
- docker.io/library/nginx@sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "201166247"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "92632544"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
- registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "86996294"
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
- registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "95952789"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67007814"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-931406
size: "4788229"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-931406 image ls --format yaml --alsologtostderr:
I1212 00:08:24.414889  302968 out.go:345] Setting OutFile to fd 1 ...
I1212 00:08:24.415063  302968 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:24.415075  302968 out.go:358] Setting ErrFile to fd 2...
I1212 00:08:24.415080  302968 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:24.415336  302968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
I1212 00:08:24.416079  302968 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:24.416219  302968 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:24.416710  302968 cli_runner.go:164] Run: docker container inspect functional-931406 --format={{.State.Status}}
I1212 00:08:24.434582  302968 ssh_runner.go:195] Run: systemctl --version
I1212 00:08:24.434641  302968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-931406
I1212 00:08:24.452171  302968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/functional-931406/id_rsa Username:docker}
I1212 00:08:24.550566  302968 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 ssh pgrep buildkitd: exit status 1 (290.823914ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image build -t localhost/my-image:functional-931406 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 image build -t localhost/my-image:functional-931406 testdata/build --alsologtostderr: (3.366889441s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-931406 image build -t localhost/my-image:functional-931406 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6a7b3fd13d6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-931406
--> 7d13d4ce04c
Successfully tagged localhost/my-image:functional-931406
7d13d4ce04c4c0172d14350ec36de891f9b1588d4d0b7e5d6d39004619d9d31a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-931406 image build -t localhost/my-image:functional-931406 testdata/build --alsologtostderr:
I1212 00:08:24.959617  303095 out.go:345] Setting OutFile to fd 1 ...
I1212 00:08:24.960444  303095 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:24.960484  303095 out.go:358] Setting ErrFile to fd 2...
I1212 00:08:24.960509  303095 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:08:24.960803  303095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
I1212 00:08:24.961498  303095 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:24.962207  303095 config.go:182] Loaded profile config "functional-931406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1212 00:08:24.962787  303095 cli_runner.go:164] Run: docker container inspect functional-931406 --format={{.State.Status}}
I1212 00:08:24.981877  303095 ssh_runner.go:195] Run: systemctl --version
I1212 00:08:24.981930  303095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-931406
I1212 00:08:25.001634  303095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/functional-931406/id_rsa Username:docker}
I1212 00:08:25.094958  303095 build_images.go:161] Building image from path: /tmp/build.401865693.tar
I1212 00:08:25.095059  303095 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:08:25.105720  303095 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.401865693.tar
I1212 00:08:25.109688  303095 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.401865693.tar: stat -c "%s %y" /var/lib/minikube/build/build.401865693.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.401865693.tar': No such file or directory
I1212 00:08:25.109726  303095 ssh_runner.go:362] scp /tmp/build.401865693.tar --> /var/lib/minikube/build/build.401865693.tar (3072 bytes)
I1212 00:08:25.137788  303095 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.401865693
I1212 00:08:25.147209  303095 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.401865693 -xf /var/lib/minikube/build/build.401865693.tar
I1212 00:08:25.157723  303095 crio.go:315] Building image: /var/lib/minikube/build/build.401865693
I1212 00:08:25.157799  303095 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-931406 /var/lib/minikube/build/build.401865693 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1212 00:08:28.229401  303095 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-931406 /var/lib/minikube/build/build.401865693 --cgroup-manager=cgroupfs: (3.071568948s)
I1212 00:08:28.229482  303095 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.401865693
I1212 00:08:28.240282  303095 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.401865693.tar
I1212 00:08:28.256743  303095 build_images.go:217] Built localhost/my-image:functional-931406 from /tmp/build.401865693.tar
I1212 00:08:28.256776  303095 build_images.go:133] succeeded building to: functional-931406
I1212 00:08:28.256782  303095 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.00s)
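The stderr log above lays out the crio-backed build path end to end: the build context is tarred on the host, copied to /var/lib/minikube/build, unpacked, built with podman under the cgroupfs cgroup manager, and the staging files are removed. A rough local sketch of the same sequence, with a hypothetical staging directory and tag in place of minikube's generated names (it assumes the context tar has already been copied into place):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	dir := "/var/lib/minikube/build/build.example" // hypothetical staging dir
	run("sudo", "mkdir", "-p", dir)
	run("sudo", "tar", "-C", dir, "-xf", dir+".tar") // unpack the staged context tar
	run("sudo", "podman", "build",
		"-t", "localhost/my-image:example",
		"--cgroup-manager=cgroupfs", dir)
	run("sudo", "rm", "-rf", dir) // clean up the staging dir, as the log does
	run("sudo", "rm", "-f", dir+".tar")
}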

TestFunctional/parallel/ImageCommands/Setup (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-931406
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image load --daemon kicbase/echo-server:functional-931406 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-931406 image load --daemon kicbase/echo-server:functional-931406 --alsologtostderr: (2.37401943s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.62s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image load --daemon kicbase/echo-server:functional-931406 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-931406
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image load --daemon kicbase/echo-server:functional-931406 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image save kicbase/echo-server:functional-931406 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
E1212 00:07:43.298590  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image rm kicbase/echo-server:functional-931406 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-931406
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 image save --daemon kicbase/echo-server:functional-931406 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-931406
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 update-context --alsologtostderr -v=2
2024/12/12 00:08:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-931406 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
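The jsonpath query above reads the address that minikube tunnel assigned to the LoadBalancer service. The same lookup with client-go, as a sketch; the kubeconfig path and the assumption that nginx-svc lives in the default namespace come from the test context, not from a documented API:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	svc, err := client.CoreV1().Services("default").Get(context.Background(), "nginx-svc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Equivalent of jsonpath {.status.loadBalancer.ingress[0].ip}
	if ing := svc.Status.LoadBalancer.Ingress; len(ing) > 0 {
		fmt.Println("ingress IP:", ing[0].IP)
	}
}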

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.179.219 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
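The "tunnel ... is working" verdict reduces to an HTTP GET from the host against the service IP in the log. A minimal probe sketch; in the real test the route to that address only exists while minikube tunnel is running:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// 10.106.179.219 is the service address reported by the log above.
	resp, err := http.Get("http://10.106.179.219")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("tunnel status:", resp.Status)
}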

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-931406 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (9.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdany-port2852143202/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733962068766314626" to /tmp/TestFunctionalparallelMountCmdany-port2852143202/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733962068766314626" to /tmp/TestFunctionalparallelMountCmdany-port2852143202/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733962068766314626" to /tmp/TestFunctionalparallelMountCmdany-port2852143202/001/test-1733962068766314626
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (426.827802ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:07:49.193440  272599 retry.go:31] will retry after 435.228889ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:07 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:07 test-1733962068766314626
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh cat /mount-9p/test-1733962068766314626
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-931406 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fcd2c2ff-fdf3-4f81-bec2-f80381b2ec7d] Pending
helpers_test.go:344: "busybox-mount" [fcd2c2ff-fdf3-4f81-bec2-f80381b2ec7d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1212 00:07:53.540656  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [fcd2c2ff-fdf3-4f81-bec2-f80381b2ec7d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fcd2c2ff-fdf3-4f81-bec2-f80381b2ec7d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003922317s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-931406 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdany-port2852143202/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.08s)
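The first findmnt probe above fails while the 9p mount is still coming up, and retry.go schedules another attempt after a short delay. A minimal sketch of that retry shape, with illustrative attempt counts and backoff rather than minikube's actual policy:

package main

import (
	"errors"
	"fmt"
	"time"
)

func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 400*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			return errors.New("exit status 1") // first probe fails, as in the log
		}
		return nil
	})
	fmt.Println("done:", err)
}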

TestFunctional/parallel/MountCmd/specific-port (2.4s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdspecific-port1754237518/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (646.742581ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:07:58.491337  272599 retry.go:31] will retry after 284.936761ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdspecific-port1754237518/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 ssh "sudo umount -f /mount-9p": exit status 1 (378.3683ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-931406 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdspecific-port1754237518/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.40s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786397211/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786397211/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786397211/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T" /mount1: exit status 1 (910.157752ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:08:01.185477  272599 retry.go:31] will retry after 624.489878ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-931406 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786397211/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786397211/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-931406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786397211/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.70s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-931406 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-931406 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-r49tj" [6a653fec-65f3-4109-89ea-3b4d9b61dac2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E1212 00:08:14.022646  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-64b4f8f9ff-r49tj" [6a653fec-65f3-4109-89ea-3b4d9b61dac2] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004331567s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
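The "waiting 10m0s for pods matching app=hello-node" step is a poll over the pod list until every match reports Running. A condensed client-go sketch of that loop; kubeconfig resolution is simplified and the poll interval is illustrative:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=hello-node"})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	fmt.Println("healthy:", err == nil)
}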

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "353.214283ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "69.217756ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
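The "Took ..." lines are simple wall-clock measurements around each CLI invocation. The same pattern in miniature (binary name shortened to minikube for the sketch):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("minikube", "profile", "list").CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Printf("Took %q to run \"minikube profile list\"\n%s", time.Since(start).String(), out)
}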

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "357.34098ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "52.727445ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 service list -o json
functional_test.go:1494: Took "647.414139ms" to run "out/minikube-linux-arm64 -p functional-931406 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30331
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-931406 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30331
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.59s)
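Both endpoints found above (https and plain HTTP on 192.168.49.2:30331) are just the node IP joined with the service's NodePort. A sketch of assembling that URL with client-go, under the same default-namespace and single-port assumptions as the test:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	svc, err := client.CoreV1().Services("default").Get(ctx, "hello-node", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil || len(nodes.Items) == 0 {
		panic("no nodes")
	}
	for _, addr := range nodes.Items[0].Status.Addresses {
		if addr.Type == corev1.NodeInternalIP {
			// e.g. http://192.168.49.2:30331, as found by the test
			fmt.Printf("http://%s:%d\n", addr.Address, svc.Spec.Ports[0].NodePort)
		}
	}
}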

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-931406
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-931406
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-931406
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (175.94s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-005401 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1212 00:08:54.984118  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:10:16.905940  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-005401 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m55.130850123s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (175.94s)

TestMultiControlPlane/serial/DeployApp (9.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-005401 -- rollout status deployment/busybox: (6.095944544s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-4f6t4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-qdc47 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-wbhmk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-4f6t4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-qdc47 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-wbhmk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-4f6t4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-qdc47 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-wbhmk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.18s)
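The block above runs the same DNS probe across every busybox replica for progressively more qualified names. A compact sketch of that matrix using kubectl exec directly (pod names are the ones in the log; the current kubectl context is assumed to point at the ha-005401 cluster):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-4f6t4", "busybox-7dff88458-qdc47", "busybox-7dff88458-wbhmk"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, name := range names {
		for _, pod := range pods {
			// Equivalent of: kubectl exec <pod> -- nslookup <name>
			out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).CombinedOutput()
			fmt.Printf("%s / %s: err=%v\n%s", pod, name, err, out)
		}
	}
}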

TestMultiControlPlane/serial/PingHostFromPods (1.68s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-4f6t4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-4f6t4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-qdc47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-qdc47 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-wbhmk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-005401 -- exec busybox-7dff88458-wbhmk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.68s)
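The shell pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 plucks the third space-separated field of nslookup's fifth output line, which is the address host.minikube.internal resolves to; that address then becomes the ping target (192.168.49.1 here). A small sketch of the same extraction on captured output; the sample text is illustrative busybox nslookup output, and strings.Fields only approximates cut when fields are single-spaced:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative busybox nslookup output; the answer lands on line 5.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return
	}
	fields := strings.Fields(lines[4]) // awk 'NR==5' (awk counts from 1)
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // ~cut -d' ' -f3 -> 192.168.49.1
	}
}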

TestMultiControlPlane/serial/AddWorkerNode (35.14s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-005401 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-005401 -v=7 --alsologtostderr: (34.137931889s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr: (1.0046838s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.14s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-005401 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.025681278s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (18.99s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp testdata/cp-test.txt ha-005401:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2949735802/001/cp-test_ha-005401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401:/home/docker/cp-test.txt ha-005401-m02:/home/docker/cp-test_ha-005401_ha-005401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m02 "sudo cat /home/docker/cp-test_ha-005401_ha-005401-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401:/home/docker/cp-test.txt ha-005401-m03:/home/docker/cp-test_ha-005401_ha-005401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m03 "sudo cat /home/docker/cp-test_ha-005401_ha-005401-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401:/home/docker/cp-test.txt ha-005401-m04:/home/docker/cp-test_ha-005401_ha-005401-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m04 "sudo cat /home/docker/cp-test_ha-005401_ha-005401-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp testdata/cp-test.txt ha-005401-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2949735802/001/cp-test_ha-005401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m02:/home/docker/cp-test.txt ha-005401:/home/docker/cp-test_ha-005401-m02_ha-005401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401 "sudo cat /home/docker/cp-test_ha-005401-m02_ha-005401.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m02:/home/docker/cp-test.txt ha-005401-m03:/home/docker/cp-test_ha-005401-m02_ha-005401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m03 "sudo cat /home/docker/cp-test_ha-005401-m02_ha-005401-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m02:/home/docker/cp-test.txt ha-005401-m04:/home/docker/cp-test_ha-005401-m02_ha-005401-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m04 "sudo cat /home/docker/cp-test_ha-005401-m02_ha-005401-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp testdata/cp-test.txt ha-005401-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2949735802/001/cp-test_ha-005401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m03:/home/docker/cp-test.txt ha-005401:/home/docker/cp-test_ha-005401-m03_ha-005401.txt
E1212 00:12:33.044391  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401 "sudo cat /home/docker/cp-test_ha-005401-m03_ha-005401.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m03:/home/docker/cp-test.txt ha-005401-m02:/home/docker/cp-test_ha-005401-m03_ha-005401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m02 "sudo cat /home/docker/cp-test_ha-005401-m03_ha-005401-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m03:/home/docker/cp-test.txt ha-005401-m04:/home/docker/cp-test_ha-005401-m03_ha-005401-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m04 "sudo cat /home/docker/cp-test_ha-005401-m03_ha-005401-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp testdata/cp-test.txt ha-005401-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2949735802/001/cp-test_ha-005401-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m04:/home/docker/cp-test.txt ha-005401:/home/docker/cp-test_ha-005401-m04_ha-005401.txt
E1212 00:12:37.352058  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:12:37.360070  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:12:37.372991  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:12:37.394449  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:12:37.435790  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:12:37.517159  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m04 "sudo cat /home/docker/cp-test.txt"
E1212 00:12:37.679084  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401 "sudo cat /home/docker/cp-test_ha-005401-m04_ha-005401.txt"
E1212 00:12:38.000802  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m04:/home/docker/cp-test.txt ha-005401-m02:/home/docker/cp-test_ha-005401-m04_ha-005401-m02.txt
E1212 00:12:38.642302  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m02 "sudo cat /home/docker/cp-test_ha-005401-m04_ha-005401-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 cp ha-005401-m04:/home/docker/cp-test.txt ha-005401-m03:/home/docker/cp-test_ha-005401-m04_ha-005401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m04 "sudo cat /home/docker/cp-test.txt"
E1212 00:12:39.932683  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 ssh -n ha-005401-m03 "sudo cat /home/docker/cp-test_ha-005401-m04_ha-005401-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.99s)
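Every row of the CopyFile matrix above is the same round trip: minikube cp a known file onto a node, then cat it back over minikube ssh and compare. One round of that check, sketched against the profile and paths from the log (it assumes the minikube binary is on PATH, where the test uses out/minikube-linux-arm64):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	node := "ha-005401-m02"
	// Copy the file onto the node...
	if err := exec.Command("minikube", "-p", "ha-005401", "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// ...then read it back over SSH and compare.
	got, err := exec.Command("minikube", "-p", "ha-005401", "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("match:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}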

TestMultiControlPlane/serial/StopSecondaryNode (12.72s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 node stop m02 -v=7 --alsologtostderr
E1212 00:12:42.495381  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:12:47.617575  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-005401 node stop m02 -v=7 --alsologtostderr: (11.970539336s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr: exit status 7 (744.587849ms)

-- stdout --
	ha-005401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-005401-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-005401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-005401-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1212 00:12:52.436818  319242 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:12:52.437043  319242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:12:52.437073  319242 out.go:358] Setting ErrFile to fd 2...
	I1212 00:12:52.437097  319242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:12:52.437376  319242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1212 00:12:52.437585  319242 out.go:352] Setting JSON to false
	I1212 00:12:52.437641  319242 mustload.go:65] Loading cluster: ha-005401
	I1212 00:12:52.437680  319242 notify.go:220] Checking for updates...
	I1212 00:12:52.438184  319242 config.go:182] Loaded profile config "ha-005401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:12:52.438221  319242 status.go:174] checking status of ha-005401 ...
	I1212 00:12:52.438773  319242 cli_runner.go:164] Run: docker container inspect ha-005401 --format={{.State.Status}}
	I1212 00:12:52.459363  319242 status.go:371] ha-005401 host status = "Running" (err=<nil>)
	I1212 00:12:52.459390  319242 host.go:66] Checking if "ha-005401" exists ...
	I1212 00:12:52.459710  319242 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-005401
	I1212 00:12:52.485845  319242 host.go:66] Checking if "ha-005401" exists ...
	I1212 00:12:52.486134  319242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:12:52.486253  319242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-005401
	I1212 00:12:52.505896  319242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/ha-005401/id_rsa Username:docker}
	I1212 00:12:52.599945  319242 ssh_runner.go:195] Run: systemctl --version
	I1212 00:12:52.604549  319242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:12:52.616634  319242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:12:52.680166  319242 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-12-12 00:12:52.670417761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1212 00:12:52.680761  319242 kubeconfig.go:125] found "ha-005401" server: "https://192.168.49.254:8443"
	I1212 00:12:52.680796  319242 api_server.go:166] Checking apiserver status ...
	I1212 00:12:52.680841  319242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:12:52.693479  319242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1435/cgroup
	I1212 00:12:52.704486  319242 api_server.go:182] apiserver freezer: "12:freezer:/docker/af3b348fe68fe77498f8795ea931d97dc6ff0b2945bbaf9a8602fb9603bc90b7/crio/crio-93f3cc9f27a5a3fc0b9a86665faaf28997220bf4b3b59f5235636d3396a90d68"
	I1212 00:12:52.704551  319242 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/af3b348fe68fe77498f8795ea931d97dc6ff0b2945bbaf9a8602fb9603bc90b7/crio/crio-93f3cc9f27a5a3fc0b9a86665faaf28997220bf4b3b59f5235636d3396a90d68/freezer.state
	I1212 00:12:52.714997  319242 api_server.go:204] freezer state: "THAWED"
	I1212 00:12:52.715023  319242 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 00:12:52.723439  319242 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 00:12:52.723467  319242 status.go:463] ha-005401 apiserver status = Running (err=<nil>)
	I1212 00:12:52.723482  319242 status.go:176] ha-005401 status: &{Name:ha-005401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:12:52.723500  319242 status.go:174] checking status of ha-005401-m02 ...
	I1212 00:12:52.723796  319242 cli_runner.go:164] Run: docker container inspect ha-005401-m02 --format={{.State.Status}}
	I1212 00:12:52.742993  319242 status.go:371] ha-005401-m02 host status = "Stopped" (err=<nil>)
	I1212 00:12:52.743017  319242 status.go:384] host is not running, skipping remaining checks
	I1212 00:12:52.743024  319242 status.go:176] ha-005401-m02 status: &{Name:ha-005401-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:12:52.743046  319242 status.go:174] checking status of ha-005401-m03 ...
	I1212 00:12:52.743354  319242 cli_runner.go:164] Run: docker container inspect ha-005401-m03 --format={{.State.Status}}
	I1212 00:12:52.760808  319242 status.go:371] ha-005401-m03 host status = "Running" (err=<nil>)
	I1212 00:12:52.760835  319242 host.go:66] Checking if "ha-005401-m03" exists ...
	I1212 00:12:52.761282  319242 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-005401-m03
	I1212 00:12:52.779237  319242 host.go:66] Checking if "ha-005401-m03" exists ...
	I1212 00:12:52.779572  319242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:12:52.779620  319242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-005401-m03
	I1212 00:12:52.797652  319242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/ha-005401-m03/id_rsa Username:docker}
	I1212 00:12:52.887701  319242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:12:52.900659  319242 kubeconfig.go:125] found "ha-005401" server: "https://192.168.49.254:8443"
	I1212 00:12:52.900685  319242 api_server.go:166] Checking apiserver status ...
	I1212 00:12:52.900725  319242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:12:52.913065  319242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1303/cgroup
	I1212 00:12:52.923080  319242 api_server.go:182] apiserver freezer: "12:freezer:/docker/0ea7063b33227d659f5abbbc4ba278091a56caf09cef60e77e1459c97ed43de2/crio/crio-309ad45d3f20e2584b15e90e989a55d8d2b5e08037b4a171f85a8a95a1b5f598"
	I1212 00:12:52.923198  319242 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0ea7063b33227d659f5abbbc4ba278091a56caf09cef60e77e1459c97ed43de2/crio/crio-309ad45d3f20e2584b15e90e989a55d8d2b5e08037b4a171f85a8a95a1b5f598/freezer.state
	I1212 00:12:52.939697  319242 api_server.go:204] freezer state: "THAWED"
	I1212 00:12:52.939772  319242 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 00:12:52.948528  319242 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 00:12:52.948571  319242 status.go:463] ha-005401-m03 apiserver status = Running (err=<nil>)
	I1212 00:12:52.948602  319242 status.go:176] ha-005401-m03 status: &{Name:ha-005401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:12:52.948623  319242 status.go:174] checking status of ha-005401-m04 ...
	I1212 00:12:52.948960  319242 cli_runner.go:164] Run: docker container inspect ha-005401-m04 --format={{.State.Status}}
	I1212 00:12:52.971933  319242 status.go:371] ha-005401-m04 host status = "Running" (err=<nil>)
	I1212 00:12:52.971963  319242 host.go:66] Checking if "ha-005401-m04" exists ...
	I1212 00:12:52.972265  319242 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-005401-m04
	I1212 00:12:52.991540  319242 host.go:66] Checking if "ha-005401-m04" exists ...
	I1212 00:12:52.991860  319242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:12:52.991905  319242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-005401-m04
	I1212 00:12:53.009231  319242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/ha-005401-m04/id_rsa Username:docker}
	I1212 00:12:53.102927  319242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:12:53.117733  319242 status.go:176] ha-005401-m04 status: &{Name:ha-005401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.72s)
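
The status probe in the stderr block above follows a fixed sequence per control-plane node: find the kube-apiserver process with pgrep, resolve its freezer cgroup from /proc/<pid>/cgroup, confirm the cgroup is THAWED (a frozen, i.e. paused, apiserver cannot answer), then query /healthz on the shared endpoint. A minimal Go sketch of the same sequence, assuming an insecure TLS client for the self-signed test cluster; this is an illustration, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

// apiserverHealthy mirrors the probe in the log: read the freezer cgroup
// for the given PID, require THAWED, then GET /healthz.
func apiserverHealthy(pid int, endpoint string) (bool, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return false, err
	}
	var freezerPath string
	for _, line := range strings.Split(string(data), "\n") {
		// cgroup v1 entries look like "12:freezer:/docker/<id>/crio/crio-<id>".
		if parts := strings.SplitN(line, ":", 3); len(parts) == 3 && parts[1] == "freezer" {
			freezerPath = parts[2]
		}
	}
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + freezerPath + "/freezer.state")
	if err != nil {
		return false, err
	}
	if strings.TrimSpace(string(state)) != "THAWED" {
		return false, nil // frozen/paused apiserver: skip the HTTP check
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test cluster CA is self-signed
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	// PID and endpoint taken from the log lines above.
	ok, err := apiserverHealthy(1435, "https://192.168.49.254:8443")
	fmt.Println(ok, err)
}

Both running control-plane nodes are checked against the same URL, https://192.168.49.254:8443: in this HA setup the API servers sit behind one shared endpoint, so the per-node signal comes from the freezer/cgroup check rather than the address.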

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (33.39s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 node start m02 -v=7 --alsologtostderr
E1212 00:12:57.859865  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:13:00.748066  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:13:18.341962  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-005401 node start m02 -v=7 --alsologtostderr: (31.91637837s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr: (1.322141179s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.294646488s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (154.57s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-005401 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-005401 -v=7 --alsologtostderr
E1212 00:13:59.304317  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-005401 -v=7 --alsologtostderr: (36.960493772s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-005401 --wait=true -v=7 --alsologtostderr
E1212 00:15:21.225778  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-005401 --wait=true -v=7 --alsologtostderr: (1m57.37275311s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-005401
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (154.57s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.53s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-005401 node delete m03 -v=7 --alsologtostderr: (11.559435912s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.66s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-005401 stop -v=7 --alsologtostderr: (35.52765606s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr: exit status 7 (130.142087ms)
-- stdout --
	ha-005401
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-005401-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-005401-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1212 00:16:52.002614  333169 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:16:52.002815  333169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:16:52.002843  333169 out.go:358] Setting ErrFile to fd 2...
	I1212 00:16:52.002873  333169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:16:52.003250  333169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1212 00:16:52.003522  333169 out.go:352] Setting JSON to false
	I1212 00:16:52.003566  333169 mustload.go:65] Loading cluster: ha-005401
	I1212 00:16:52.004026  333169 notify.go:220] Checking for updates...
	I1212 00:16:52.004452  333169 config.go:182] Loaded profile config "ha-005401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:16:52.004956  333169 status.go:174] checking status of ha-005401 ...
	I1212 00:16:52.005649  333169 cli_runner.go:164] Run: docker container inspect ha-005401 --format={{.State.Status}}
	I1212 00:16:52.029122  333169 status.go:371] ha-005401 host status = "Stopped" (err=<nil>)
	I1212 00:16:52.029149  333169 status.go:384] host is not running, skipping remaining checks
	I1212 00:16:52.029156  333169 status.go:176] ha-005401 status: &{Name:ha-005401 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:16:52.029184  333169 status.go:174] checking status of ha-005401-m02 ...
	I1212 00:16:52.029510  333169 cli_runner.go:164] Run: docker container inspect ha-005401-m02 --format={{.State.Status}}
	I1212 00:16:52.048417  333169 status.go:371] ha-005401-m02 host status = "Stopped" (err=<nil>)
	I1212 00:16:52.048441  333169 status.go:384] host is not running, skipping remaining checks
	I1212 00:16:52.048448  333169 status.go:176] ha-005401-m02 status: &{Name:ha-005401-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:16:52.048470  333169 status.go:174] checking status of ha-005401-m04 ...
	I1212 00:16:52.048799  333169 cli_runner.go:164] Run: docker container inspect ha-005401-m04 --format={{.State.Status}}
	I1212 00:16:52.079178  333169 status.go:371] ha-005401-m04 host status = "Stopped" (err=<nil>)
	I1212 00:16:52.079205  333169 status.go:384] host is not running, skipping remaining checks
	I1212 00:16:52.079212  333169 status.go:176] ha-005401-m04 status: &{Name:ha-005401-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.66s)
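
The "exit status 7" above is the expected result for a fully stopped cluster: per the status command's help text, minikube encodes host, kubelet, and apiserver failures as bits 1, 2, and 4 of the exit code and ORs them together. A small decoder sketch (the flag descriptions below are paraphrased, not minikube identifiers):

package main

import "fmt"

// decodeStatusExit expands minikube's documented status exit-code bits:
// 7 == 1|2|4, i.e. host, kubelet, and apiserver all down.
func decodeStatusExit(code int) []string {
	flags := []struct {
		bit  int
		desc string
	}{
		{1, "host not running"},
		{2, "kubelet not running"},
		{4, "apiserver not running"},
	}
	var out []string
	for _, f := range flags {
		if code&f.bit != 0 {
			out = append(out, f.desc)
		}
	}
	return out
}

func main() {
	fmt.Println(decodeStatusExit(7)) // [host not running kubelet not running apiserver not running]
}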

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (100.37s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-005401 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1212 00:17:33.044874  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:17:37.352101  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:18:05.067979  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-005401 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.410435569s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.37s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.21s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-005401 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-005401 --control-plane -v=7 --alsologtostderr: (1m13.257110653s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-005401 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.21s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.076747108s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestJSONOutput/start/Command (49.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-442912 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-442912 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (49.694387243s)
--- PASS: TestJSONOutput/start/Command (49.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
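
These two parallel subtests assert invariants over the currentstep values carried by the io.k8s.sigs.minikube.step events: no step number repeats, and the sequence never decreases (gaps are fine, since steps may be skipped). A sketch of both checks; checkSteps is a hypothetical helper, not the test's own code:

package main

import (
	"fmt"
	"strconv"
)

// checkSteps enforces distinct and non-decreasing step numbers, the two
// properties the subtests above are named for.
func checkSteps(steps []string) error {
	seen := map[int]bool{}
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return fmt.Errorf("non-numeric step %q: %w", s, err)
		}
		if seen[n] {
			return fmt.Errorf("step %d repeated", n)
		}
		seen[n] = true
		if n < prev {
			return fmt.Errorf("step %d came after %d", n, prev)
		}
		prev = n
	}
	return nil
}

func main() {
	fmt.Println(checkSteps([]string{"0", "1", "3", "4"})) // <nil>: gaps are allowed
}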

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-442912 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-442912 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-442912 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-442912 --output=json --user=testUser: (5.941591249s)
--- PASS: TestJSONOutput/stop/Command (5.94s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-866297 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-866297 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.520143ms)
-- stdout --
	{"specversion":"1.0","id":"780b9bb8-11d8-496d-933d-b32599d6b636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-866297] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"48870b75-91cc-4ab2-bf37-f0ac22cac01a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20083"}}
	{"specversion":"1.0","id":"560aacef-5891-4a62-8b93-37fd5450874a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"13335e7c-eda4-498c-8195-dcbcc66a7ca3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig"}}
	{"specversion":"1.0","id":"ed735f4a-3365-43a2-b370-88ad792902df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube"}}
	{"specversion":"1.0","id":"d7df494a-724e-4feb-bf52-77e688ff5d12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5cceabde-3cee-4833-b41a-9e383bf5e8a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"04c05824-bf81-4f7c-b6c8-370f083e80b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-866297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-866297
--- PASS: TestErrorJSONOutput (0.23s)
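
Every --output=json line above is a CloudEvents envelope (specversion 1.0) whose data payload varies with the event type: step events carry currentstep/totalsteps, info events a message, error events an exit code and advice. A minimal stream reader, assuming string-valued data fields as in the samples shown:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// cloudEvent declares just the fields visible in the output above; data
// stays a loose map because its keys differ per event type.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // non-JSON noise between events
		}
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		fmt.Printf("%-45s %s\n", ev.Type, ev.Data["message"])
	}
}

Piping the stdout block above through this program would print one line per event, ending with io.k8s.sigs.minikube.error and "The driver 'fail' is not supported on linux/arm64".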

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.47s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-192104 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-192104 --network=: (36.286089502s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-192104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-192104
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-192104: (2.164006775s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.47s)
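
With the empty --network= value, the test expects a docker network matching the profile name to appear; the verification is just a name lookup over docker network ls. A sketch of that assertion (networkExists is a hypothetical helper; the network name is taken from this run's profile):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists runs `docker network ls --format {{.Name}}`, the same
// command the test uses, and scans for an exact name match.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return false, err
	}
	for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if n == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := networkExists("docker-network-192104")
	fmt.Println(ok, err)
}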

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.41s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-731958 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-731958 --network=bridge: (32.37251168s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-731958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-731958
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-731958: (2.006566983s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.41s)

                                                
                                    
TestKicExistingNetwork (34.21s)

=== RUN   TestKicExistingNetwork
I1212 00:22:11.318565  272599 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 00:22:11.337754  272599 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 00:22:11.337839  272599 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1212 00:22:11.337856  272599 cli_runner.go:164] Run: docker network inspect existing-network
W1212 00:22:11.356079  272599 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1212 00:22:11.356127  272599 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1212 00:22:11.356161  272599 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1212 00:22:11.356503  272599 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:22:11.376739  272599 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6076c9b7c9e2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:69:45:aa:86} reservation:<nil>}
I1212 00:22:11.377808  272599 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e03500}
I1212 00:22:11.377863  272599 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1212 00:22:11.377980  272599 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1212 00:22:11.448689  272599 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-716295 --network=existing-network
E1212 00:22:33.051434  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:22:37.354269  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-716295 --network=existing-network: (31.932506057s)
helpers_test.go:175: Cleaning up "existing-network-716295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-716295
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-716295: (2.107964395s)
I1212 00:22:45.506395  272599 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.21s)
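
The network_create lines above show the free-subnet scan: 192.168.49.0/24 is already held by an existing bridge, so candidate private /24 blocks are tried until a free one (192.168.58.0/24 here) is found and handed to docker network create. A simplified sketch of the scan; minikube's real step size and candidate list differ, and firstFreeSubnet is a hypothetical helper:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks /24 candidates from start, one third-octet step
// at a time, and returns the first block that overlaps none of the
// subnets already in use (e.g. those reported by docker network inspect).
func firstFreeSubnet(start net.IP, taken []*net.IPNet, tries int) (*net.IPNet, error) {
	ip := start.To4()
	for i := 0; i < tries; i++ {
		cand := &net.IPNet{IP: ip, Mask: net.CIDRMask(24, 32)}
		free := true
		for _, t := range taken {
			if t.Contains(cand.IP) || cand.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			return cand, nil
		}
		next := make(net.IP, 4)
		copy(next, ip)
		next[2]++ // 192.168.49.0 -> 192.168.50.0 -> ...
		ip = next
	}
	return nil, fmt.Errorf("no free /24 after %d tries", tries)
}

func main() {
	_, used, _ := net.ParseCIDR("192.168.49.0/24") // taken, per the log
	sub, err := firstFreeSubnet(net.ParseIP("192.168.49.0"), []*net.IPNet{used}, 16)
	fmt.Println(sub, err) // 192.168.50.0/24 <nil> (minikube itself landed on .58)
}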

                                                
                                    
TestKicCustomSubnet (34.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-082728 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-082728 --subnet=192.168.60.0/24: (32.228510098s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-082728 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-082728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-082728
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-082728: (2.200815636s)
--- PASS: TestKicCustomSubnet (34.46s)
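
The assertion here reuses the inspect format string from the log: pull the first IPAM config block of the network and compare its subnet to the CIDR that was requested. A sketch with the profile name and subnet from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24"
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-082728",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
		return
	}
	fmt.Println("subnet matches", want)
}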

                                                
                                    
TestKicStaticIP (34.15s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-781925 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-781925 --static-ip=192.168.200.200: (31.880294889s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-781925 ip
helpers_test.go:175: Cleaning up "static-ip-781925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-781925
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-781925: (2.107826906s)
--- PASS: TestKicStaticIP (34.15s)

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (70.79s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-448295 --driver=docker  --container-runtime=crio
E1212 00:23:56.110275  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-448295 --driver=docker  --container-runtime=crio: (31.121149695s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-450953 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-450953 --driver=docker  --container-runtime=crio: (33.597065558s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-448295
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-450953
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-450953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-450953
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-450953: (2.070471472s)
helpers_test.go:175: Cleaning up "first-448295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-448295
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-448295: (2.312173282s)
--- PASS: TestMinikubeProfile (70.79s)
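
The profile checks above lean on profile list -ojson, which reports profiles split into valid and invalid sets. A minimal reader for that output, assuming only the Name and Status fields of each valid entry (the full schema carries the whole cluster config):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList declares only the slice and fields this sketch reads.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}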

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.47s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-666074 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-666074 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.467498542s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.47s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-666074 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.39s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-667920 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-667920 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.389673456s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.39s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-667920 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-666074 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-666074 --alsologtostderr -v=5: (1.645532911s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-667920 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-667920
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-667920: (1.204252872s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-667920
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-667920: (6.925742377s)
--- PASS: TestMountStart/serial/RestartStopped (7.93s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-667920 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
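
Every VerifyMount* step in this block reduces to the same liveness check: the host-directory mount (a 9p share configured by the --mount-port/--mount-msize flags above) is considered up when listing /minikube-host over minikube ssh succeeds. A sketch of that check; mountAlive is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
)

// mountAlive replays the verification command from the log:
// <minikube> -p <profile> ssh -- ls /minikube-host
func mountAlive(profile string) bool {
	return exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "--", "ls", "/minikube-host").Run() == nil
}

func main() {
	fmt.Println(mountAlive("mount-start-2-667920"))
}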

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.74s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255093 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-255093 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.24204347s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.74s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.7s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-255093 -- rollout status deployment/busybox: (5.584137728s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-6qtwh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-ksxqx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-6qtwh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-ksxqx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-6qtwh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-ksxqx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.70s)
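
The deploy check sweeps DNS from every busybox replica: an external name, the short in-cluster service name, and the fully qualified cluster-local name. A sketch issuing the same kubectl exec calls (pod names are copied from this run and change with every rollout):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-6qtwh", "busybox-7dff88458-ksxqx"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "multinode-255093",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s: %s resolves\n", pod, name)
		}
	}
}

Since the two replicas are expected to land on different nodes, a clean sweep shows cluster DNS working from both nodes, not just the control plane.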

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-6qtwh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-6qtwh -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-ksxqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255093 -- exec busybox-7dff88458-ksxqx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)
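
The shell pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, leans on the shape of busybox's nslookup output: line 5 holds the answer record and its third space-separated field is the resolved address, which the next command then pings (192.168.67.1, the host-side gateway in this run). A Go equivalent of the extraction, with an illustrative sample in the busybox layout:

package main

import (
	"fmt"
	"strings"
)

// hostIP mirrors `awk 'NR==5' | cut -d' ' -f3`: take line 5, split on
// single spaces, return field 3.
func hostIP(nslookupOut string) (string, error) {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("output shorter than 5 lines")
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected answer line: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Sample shaped like busybox nslookup output; values are illustrative.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"
	fmt.Println(hostIP(out)) // 192.168.67.1 <nil>
}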

                                                
                                    
TestMultiNode/serial/AddNode (28.39s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-255093 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-255093 -v 3 --alsologtostderr: (27.681003241s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.39s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-255093 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.11s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp testdata/cp-test.txt multinode-255093:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2008017065/001/cp-test_multinode-255093.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093:/home/docker/cp-test.txt multinode-255093-m02:/home/docker/cp-test_multinode-255093_multinode-255093-m02.txt
E1212 00:27:33.045367  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m02 "sudo cat /home/docker/cp-test_multinode-255093_multinode-255093-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093:/home/docker/cp-test.txt multinode-255093-m03:/home/docker/cp-test_multinode-255093_multinode-255093-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m03 "sudo cat /home/docker/cp-test_multinode-255093_multinode-255093-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp testdata/cp-test.txt multinode-255093-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2008017065/001/cp-test_multinode-255093-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093-m02:/home/docker/cp-test.txt multinode-255093:/home/docker/cp-test_multinode-255093-m02_multinode-255093.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093 "sudo cat /home/docker/cp-test_multinode-255093-m02_multinode-255093.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093-m02:/home/docker/cp-test.txt multinode-255093-m03:/home/docker/cp-test_multinode-255093-m02_multinode-255093-m03.txt
E1212 00:27:37.351823  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m03 "sudo cat /home/docker/cp-test_multinode-255093-m02_multinode-255093-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp testdata/cp-test.txt multinode-255093-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2008017065/001/cp-test_multinode-255093-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093-m03:/home/docker/cp-test.txt multinode-255093:/home/docker/cp-test_multinode-255093-m03_multinode-255093.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093 "sudo cat /home/docker/cp-test_multinode-255093-m03_multinode-255093.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093-m03:/home/docker/cp-test.txt multinode-255093-m02:/home/docker/cp-test_multinode-255093-m03_multinode-255093-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m02 "sudo cat /home/docker/cp-test_multinode-255093-m03_multinode-255093-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.11s)
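
Condensed from the run above, a minimal sketch of the round-trip this test exercises: push a local file to the control plane, pull it back out, copy it node-to-node, then verify each hop over SSH. Profile and node names are the ones from this run; the destination filename in the node-to-node copy is illustrative.

    # local -> control-plane node
    out/minikube-linux-arm64 -p multinode-255093 cp testdata/cp-test.txt multinode-255093:/home/docker/cp-test.txt
    # node -> local
    out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093:/home/docker/cp-test.txt /tmp/cp-test_multinode-255093.txt
    # node -> node
    out/minikube-linux-arm64 -p multinode-255093 cp multinode-255093:/home/docker/cp-test.txt multinode-255093-m02:/home/docker/cp-test_copy.txt
    # verify the last hop landed intact
    out/minikube-linux-arm64 -p multinode-255093 ssh -n multinode-255093-m02 "sudo cat /home/docker/cp-test_copy.txt"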

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-255093 node stop m03: (1.197696583s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-255093 status: exit status 7 (525.597683ms)

                                                
                                                
-- stdout --
	multinode-255093
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-255093-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-255093-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-255093 status --alsologtostderr: exit status 7 (528.262199ms)

                                                
                                                
-- stdout --
	multinode-255093
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-255093-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-255093-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:27:43.092207  386977 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:27:43.092429  386977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:27:43.092455  386977 out.go:358] Setting ErrFile to fd 2...
	I1212 00:27:43.092474  386977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:27:43.092849  386977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1212 00:27:43.093556  386977 out.go:352] Setting JSON to false
	I1212 00:27:43.093618  386977 mustload.go:65] Loading cluster: multinode-255093
	I1212 00:27:43.093797  386977 notify.go:220] Checking for updates...
	I1212 00:27:43.094186  386977 config.go:182] Loaded profile config "multinode-255093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:27:43.094222  386977 status.go:174] checking status of multinode-255093 ...
	I1212 00:27:43.094835  386977 cli_runner.go:164] Run: docker container inspect multinode-255093 --format={{.State.Status}}
	I1212 00:27:43.115948  386977 status.go:371] multinode-255093 host status = "Running" (err=<nil>)
	I1212 00:27:43.115971  386977 host.go:66] Checking if "multinode-255093" exists ...
	I1212 00:27:43.116288  386977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-255093
	I1212 00:27:43.140031  386977 host.go:66] Checking if "multinode-255093" exists ...
	I1212 00:27:43.140327  386977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:27:43.140377  386977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-255093
	I1212 00:27:43.158027  386977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33221 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/multinode-255093/id_rsa Username:docker}
	I1212 00:27:43.251679  386977 ssh_runner.go:195] Run: systemctl --version
	I1212 00:27:43.255895  386977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:27:43.267519  386977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:27:43.320462  386977 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-12 00:27:43.311025848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1212 00:27:43.321124  386977 kubeconfig.go:125] found "multinode-255093" server: "https://192.168.67.2:8443"
	I1212 00:27:43.321162  386977 api_server.go:166] Checking apiserver status ...
	I1212 00:27:43.321211  386977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:27:43.332949  386977 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	I1212 00:27:43.343014  386977 api_server.go:182] apiserver freezer: "12:freezer:/docker/be7d3b244a86677f763f431922b329f57040fb448b19e6d508e9e3e3d341ccb5/crio/crio-cf3552a7b986e252c6ead7eac84aa70c55fd1420c330e1572badb46c8c307a34"
	I1212 00:27:43.343082  386977 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/be7d3b244a86677f763f431922b329f57040fb448b19e6d508e9e3e3d341ccb5/crio/crio-cf3552a7b986e252c6ead7eac84aa70c55fd1420c330e1572badb46c8c307a34/freezer.state
	I1212 00:27:43.355633  386977 api_server.go:204] freezer state: "THAWED"
	I1212 00:27:43.355725  386977 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1212 00:27:43.364747  386977 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1212 00:27:43.364776  386977 status.go:463] multinode-255093 apiserver status = Running (err=<nil>)
	I1212 00:27:43.364787  386977 status.go:176] multinode-255093 status: &{Name:multinode-255093 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:27:43.364835  386977 status.go:174] checking status of multinode-255093-m02 ...
	I1212 00:27:43.365179  386977 cli_runner.go:164] Run: docker container inspect multinode-255093-m02 --format={{.State.Status}}
	I1212 00:27:43.382729  386977 status.go:371] multinode-255093-m02 host status = "Running" (err=<nil>)
	I1212 00:27:43.382756  386977 host.go:66] Checking if "multinode-255093-m02" exists ...
	I1212 00:27:43.383252  386977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-255093-m02
	I1212 00:27:43.400430  386977 host.go:66] Checking if "multinode-255093-m02" exists ...
	I1212 00:27:43.400893  386977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:27:43.400944  386977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-255093-m02
	I1212 00:27:43.418773  386977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33226 SSHKeyPath:/home/jenkins/minikube-integration/20083-267093/.minikube/machines/multinode-255093-m02/id_rsa Username:docker}
	I1212 00:27:43.511295  386977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:27:43.522782  386977 status.go:176] multinode-255093-m02 status: &{Name:multinode-255093-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:27:43.522818  386977 status.go:174] checking status of multinode-255093-m03 ...
	I1212 00:27:43.523124  386977 cli_runner.go:164] Run: docker container inspect multinode-255093-m03 --format={{.State.Status}}
	I1212 00:27:43.549400  386977 status.go:371] multinode-255093-m03 host status = "Stopped" (err=<nil>)
	I1212 00:27:43.549433  386977 status.go:384] host is not running, skipping remaining checks
	I1212 00:27:43.549440  386977 status.go:176] multinode-255093-m03 status: &{Name:multinode-255093-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
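
The exit code carries the result here: in this run `status` exits 0 while every node is up and exits 7 once any host is stopped, so scripts should branch on the exit status rather than parse the table. A minimal sketch using this run's profile (the echo messages are illustrative); `node start m03` reverses the stop, as the next test shows:

    out/minikube-linux-arm64 -p multinode-255093 node stop m03
    if out/minikube-linux-arm64 -p multinode-255093 status; then
        echo "all nodes running"
    else
        echo "status exited $? (7 here: at least one node stopped)"
    fi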

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-255093 node start m03 -v=7 --alsologtostderr: (9.310074245s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.06s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (79.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-255093
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-255093
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-255093: (24.750941855s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255093 --wait=true -v=8 --alsologtostderr
E1212 00:29:00.429846  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-255093 --wait=true -v=8 --alsologtostderr: (55.09853118s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-255093
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.98s)
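
The assertion behind this test is that a full stop/start cycle preserves cluster membership: the node list is captured before the restart and compared after. A sketch of the same check in shell, with the temp-file diff standing in for the comparison the test performs in Go:

    out/minikube-linux-arm64 node list -p multinode-255093 > /tmp/nodes.before
    out/minikube-linux-arm64 stop -p multinode-255093
    out/minikube-linux-arm64 start -p multinode-255093 --wait=true
    out/minikube-linux-arm64 node list -p multinode-255093 > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after && echo "node list unchanged across restart"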

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-255093 node delete m03: (4.651970309s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.32s)
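
After the delete, node readiness is asserted through the go-template shown above, which prints the Ready condition status of every remaining node, one per line. Run directly (outside the test harness's extra quoting) it looks like this; the expected-output comment is illustrative:

    out/minikube-linux-arm64 -p multinode-255093 node delete m03
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expected after the delete: one "True" line per remaining node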

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-255093 stop: (23.611305291s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-255093 status: exit status 7 (117.490582ms)

                                                
                                                
-- stdout --
	multinode-255093
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-255093-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-255093 status --alsologtostderr: exit status 7 (99.922764ms)

                                                
                                                
-- stdout --
	multinode-255093
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-255093-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:29:42.701673  394399 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:29:42.701800  394399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:42.701809  394399 out.go:358] Setting ErrFile to fd 2...
	I1212 00:29:42.701815  394399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:42.702046  394399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1212 00:29:42.702261  394399 out.go:352] Setting JSON to false
	I1212 00:29:42.702289  394399 mustload.go:65] Loading cluster: multinode-255093
	I1212 00:29:42.702393  394399 notify.go:220] Checking for updates...
	I1212 00:29:42.702704  394399 config.go:182] Loaded profile config "multinode-255093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:29:42.702719  394399 status.go:174] checking status of multinode-255093 ...
	I1212 00:29:42.703536  394399 cli_runner.go:164] Run: docker container inspect multinode-255093 --format={{.State.Status}}
	I1212 00:29:42.722086  394399 status.go:371] multinode-255093 host status = "Stopped" (err=<nil>)
	I1212 00:29:42.722113  394399 status.go:384] host is not running, skipping remaining checks
	I1212 00:29:42.722120  394399 status.go:176] multinode-255093 status: &{Name:multinode-255093 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:29:42.722260  394399 status.go:174] checking status of multinode-255093-m02 ...
	I1212 00:29:42.722581  394399 cli_runner.go:164] Run: docker container inspect multinode-255093-m02 --format={{.State.Status}}
	I1212 00:29:42.747780  394399 status.go:371] multinode-255093-m02 host status = "Stopped" (err=<nil>)
	I1212 00:29:42.747803  394399 status.go:384] host is not running, skipping remaining checks
	I1212 00:29:42.747810  394399 status.go:176] multinode-255093-m02 status: &{Name:multinode-255093-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (53.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255093 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-255093 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (52.629782563s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255093 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.31s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (33.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-255093
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255093-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-255093-m02 --driver=docker  --container-runtime=crio: exit status 14 (101.039191ms)

                                                
                                                
-- stdout --
	* [multinode-255093-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-255093-m02' is duplicated with machine name 'multinode-255093-m02' in profile 'multinode-255093'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255093-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-255093-m03 --driver=docker  --container-runtime=crio: (30.768777445s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-255093
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-255093: exit status 80 (316.868653ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-255093 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-255093-m03 already exists in multinode-255093-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-255093-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-255093-m03: (1.955319993s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.20s)
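
Two separate guards fire in this test: `start -p` refuses a profile name that collides with a machine name inside an existing profile (MK_USAGE, exit 14), and `node add` refuses to add a node whose generated name is already taken by another profile (GUEST_NODE_ADD, exit 80). A sketch of the second case, cleanup included:

    out/minikube-linux-arm64 start -p multinode-255093-m03 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 node add -p multinode-255093      # exits 80: the m03 name is already taken
    out/minikube-linux-arm64 delete -p multinode-255093-m03    # free the name again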

                                                
                                    
x
+
TestPreload (130s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-486429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1212 00:32:33.045286  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:32:37.352142  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-486429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.273474104s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-486429 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-486429 image pull gcr.io/k8s-minikube/busybox: (3.32568901s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-486429
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-486429: (5.742066457s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-486429 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-486429 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.976245641s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-486429 image list
helpers_test.go:175: Cleaning up "test-preload-486429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-486429
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-486429: (2.385588728s)
--- PASS: TestPreload (130.00s)
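
The shape of the test: start an older Kubernetes with the preload tarball disabled, pull an extra image into the node, stop, restart on the default version, and confirm the pulled image survived. Condensed from the commands above (the final grep is an illustrative check, not part of the test):

    out/minikube-linux-arm64 start -p test-preload-486429 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-486429 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-486429
    out/minikube-linux-arm64 start -p test-preload-486429 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-486429 image list | grep busybox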

                                                
                                    
x
+
TestScheduledStopUnix (108.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-698328 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-698328 --memory=2048 --driver=docker  --container-runtime=crio: (32.5678504s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-698328 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-698328 -n scheduled-stop-698328
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-698328 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1212 00:33:56.437370  272599 retry.go:31] will retry after 135.029µs: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.437843  272599 retry.go:31] will retry after 144.453µs: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.438966  272599 retry.go:31] will retry after 297.194µs: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.440092  272599 retry.go:31] will retry after 349.875µs: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.441208  272599 retry.go:31] will retry after 593.408µs: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.442321  272599 retry.go:31] will retry after 947.742µs: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.443429  272599 retry.go:31] will retry after 898.315µs: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.444554  272599 retry.go:31] will retry after 1.791955ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.446936  272599 retry.go:31] will retry after 2.249181ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.450188  272599 retry.go:31] will retry after 2.931588ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.454036  272599 retry.go:31] will retry after 7.929237ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.462287  272599 retry.go:31] will retry after 10.323594ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.473510  272599 retry.go:31] will retry after 16.564884ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.490695  272599 retry.go:31] will retry after 14.552376ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.505898  272599 retry.go:31] will retry after 24.532018ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
I1212 00:33:56.531197  272599 retry.go:31] will retry after 54.972642ms: open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/scheduled-stop-698328/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-698328 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-698328 -n scheduled-stop-698328
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-698328
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-698328 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-698328
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-698328: exit status 7 (72.433418ms)

                                                
                                                
-- stdout --
	scheduled-stop-698328
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-698328 -n scheduled-stop-698328
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-698328 -n scheduled-stop-698328: exit status 7 (75.25414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-698328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-698328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-698328: (4.308906926s)
--- PASS: TestScheduledStopUnix (108.54s)
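
The pattern under test: `stop --schedule <duration>` arms a background stop and returns immediately, `--cancel-scheduled` disarms it, and once a short schedule fires the profile reports Stopped with status exit 7 (the retry lines above are the test polling for the scheduled process's pid file). A sketch; the sleep duration is illustrative:

    out/minikube-linux-arm64 stop -p scheduled-stop-698328 --schedule 5m      # arm a stop five minutes out
    out/minikube-linux-arm64 stop -p scheduled-stop-698328 --cancel-scheduled # disarm it
    out/minikube-linux-arm64 stop -p scheduled-stop-698328 --schedule 15s     # arm a short one and let it fire
    sleep 20
    out/minikube-linux-arm64 status -p scheduled-stop-698328                  # exit 7, host: Stopped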

                                                
                                    
x
+
TestInsufficientStorage (10.64s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-931387 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-931387 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.082254185s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d5dc0af4-c7ac-4333-9248-1ee6f1466ef4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-931387] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b674dd2-2d01-4afc-a38d-192f00e3995e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20083"}}
	{"specversion":"1.0","id":"f1895f22-c2a9-4318-87cb-921df22a069b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4704aec2-fe02-4af4-9ca9-bce26b8f5a92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig"}}
	{"specversion":"1.0","id":"d72e5971-7f55-4b93-aa05-40172424f7ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube"}}
	{"specversion":"1.0","id":"9dd735c9-35ad-467a-8015-38b3a9e3c868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"121b3e6d-f4fd-41b2-8630-a8567a01c320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"81bcf178-7b9f-4854-ba41-67ae8d486c85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6c0bd9c3-a07f-414d-a31f-a6a19a1b72f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ff307a4c-6a49-4227-b406-4a0517486ec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aaf3b0b6-ab1a-4ec1-9373-b9bddea5cbce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"61b7c7bf-458e-446b-b45e-d1530c09f40c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-931387\" primary control-plane node in \"insufficient-storage-931387\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8e20a73-eb52-43a1-b505-49c0f9c06a95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1733912881-20083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"62be89cd-146d-488f-b443-932b51bcf808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"181b46b7-042a-45df-b3e4-4857100141dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-931387 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-931387 --output=json --layout=cluster: exit status 7 (323.326087ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-931387","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-931387","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:35:20.262934  412106 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-931387" does not appear in /home/jenkins/minikube-integration/20083-267093/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-931387 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-931387 --output=json --layout=cluster: exit status 7 (307.192152ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-931387","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-931387","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:35:20.568877  412165 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-931387" does not appear in /home/jenkins/minikube-integration/20083-267093/kubeconfig
	E1212 00:35:20.579175  412165 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/insufficient-storage-931387/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-931387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-931387
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-931387: (1.921789765s)
--- PASS: TestInsufficientStorage (10.64s)
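
With `--output=json`, `start` emits one CloudEvents-style JSON object per line, and the storage guard surfaces as a record of type io.k8s.sigs.minikube.error with exitcode 26 (RSRC_DOCKER_STORAGE); per the message, `--force` skips the check. A sketch that pulls the error record out of the stream, assuming jq is available:

    out/minikube-linux-arm64 start -p insufficient-storage-931387 --output=json --driver=docker --container-runtime=crio \
        | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'
    out/minikube-linux-arm64 status -p insufficient-storage-931387 --output=json --layout=cluster | jq -r .StatusName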

                                                
                                    
x
+
TestRunningBinaryUpgrade (86.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3589821493 start -p running-upgrade-877338 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3589821493 start -p running-upgrade-877338 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.790096817s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-877338 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 00:40:36.112233  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-877338 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.621609182s)
helpers_test.go:175: Cleaning up "running-upgrade-877338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-877338
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-877338: (2.841984463s)
--- PASS: TestRunningBinaryUpgrade (86.19s)
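
The upgrade being exercised: create a running cluster with a pinned v1.26.0 release binary (the /tmp path above is where the test staged it), then point the freshly built binary at the same profile and let it take over in place. Condensed:

    /tmp/minikube-v1.26.0.3589821493 start -p running-upgrade-877338 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p running-upgrade-877338 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p running-upgrade-877338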

                                                
                                    
x
+
TestKubernetesUpgrade (390.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m16.16184487s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-778025
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-778025: (1.31546454s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-778025 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-778025 status --format={{.Host}}: exit status 7 (119.139882ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m35.730031854s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-778025 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (99.357796ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-778025] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-778025
	    minikube start -p kubernetes-upgrade-778025 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7780252 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-778025 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.457857399s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-778025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-778025
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-778025: (2.563400039s)
--- PASS: TestKubernetesUpgrade (390.54s)
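
The version path is start old, stop, start new on the same profile; a later downgrade attempt is rejected up front (K8S_DOWNGRADE_UNSUPPORTED, exit 106) before any cluster state is touched, with the recreate/second-cluster options printed above as the way out. Condensed:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-778025
    out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --kubernetes-version=v1.31.2 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p kubernetes-upgrade-778025 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio  # exits 106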

                                                
                                    
x
+
TestMissingContainerUpgrade (155.06s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.247889750 start -p missing-upgrade-443036 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.247889750 start -p missing-upgrade-443036 --memory=2200 --driver=docker  --container-runtime=crio: (1m25.023879194s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-443036
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-443036: (10.447321102s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-443036
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-443036 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 00:37:33.044851  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:37:37.351502  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-443036 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.638875091s)
helpers_test.go:175: Cleaning up "missing-upgrade-443036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-443036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-443036: (2.147689871s)
--- PASS: TestMissingContainerUpgrade (155.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-882465 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-882465 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (81.981664ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-882465] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
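
The conflict is rejected before anything starts: `--no-kubernetes` combined with an explicit `--kubernetes-version` is MK_USAGE (exit 14), and the suggested remedy is to clear any globally pinned version. Following the error message's own suggestion:

    out/minikube-linux-arm64 config unset kubernetes-version
    out/minikube-linux-arm64 start -p NoKubernetes-882465 --no-kubernetes --driver=docker --container-runtime=crio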

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (40.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-882465 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-882465 --driver=docker  --container-runtime=crio: (39.783936952s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-882465 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-882465 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-882465 --no-kubernetes --driver=docker  --container-runtime=crio: (4.567522696s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-882465 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-882465 status -o json: exit status 2 (348.166582ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-882465","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-882465
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-882465: (2.042413871s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-882465 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-882465 --no-kubernetes --driver=docker  --container-runtime=crio: (10.07171975s)
--- PASS: TestNoKubernetes/serial/Start (10.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-882465 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-882465 "sudo systemctl is-active --quiet service kubelet": exit status 1 (356.168776ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
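
The absence of Kubernetes is asserted from inside the node: `systemctl is-active --quiet service kubelet` exits non-zero when kubelet is not running, and `minikube ssh` propagates that as its own non-zero exit (status 1 here, wrapping the remote status 3). A sketch that branches on it (the echo messages are illustrative):

    if out/minikube-linux-arm64 ssh -p NoKubernetes-882465 "sudo systemctl is-active --quiet service kubelet"; then
        echo "kubelet is running"
    else
        echo "kubelet is not running, as expected with --no-kubernetes"
    fi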

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.20s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-882465
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-882465: (1.267058767s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.17s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-882465 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-882465 --driver=docker  --container-runtime=crio: (7.17393888s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.17s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-882465 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-882465 "sudo systemctl is-active --quiet service kubelet": exit status 1 (389.851506ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestStoppedBinaryUpgrade/Setup (1.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.46s)

TestStoppedBinaryUpgrade/Upgrade (93.49s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3180925261 start -p stopped-upgrade-222573 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3180925261 start -p stopped-upgrade-222573 --memory=2200 --vm-driver=docker  --container-runtime=crio: (42.137383221s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3180925261 -p stopped-upgrade-222573 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3180925261 -p stopped-upgrade-222573 stop: (2.39040108s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-222573 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-222573 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.955258956s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (93.49s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-222573
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-222573: (1.111073661s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

TestPause/serial/Start (55.33s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-731652 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-731652 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.328873299s)
--- PASS: TestPause/serial/Start (55.33s)

TestPause/serial/SecondStartNoReconfiguration (35.75s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-731652 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-731652 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.691271598s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.75s)

TestPause/serial/Pause (1.13s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-731652 --alsologtostderr -v=5
E1212 00:42:33.044691  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-731652 --alsologtostderr -v=5: (1.125378237s)
--- PASS: TestPause/serial/Pause (1.13s)

TestPause/serial/VerifyStatus (0.40s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-731652 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-731652 --output=json --layout=cluster: exit status 2 (402.076387ms)

-- stdout --
	{"Name":"pause-731652","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-731652","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
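
The "--layout=cluster" output above is nested: a profile-level status plus per-node component statuses, with 418 the code the log shows for "Paused" and 405 for "Stopped". A sketch of decoding that shape (type names are illustrative, not minikube's own; the JSON literal is a trimmed copy of the log's):

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors the shape of the --layout=cluster JSON above;
// the type names are hypothetical, not minikube's own.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	// Trimmed copy of the status JSON from the log (418 = Paused, 405 = Stopped).
	raw := `{"Name":"pause-731652","StatusCode":418,"StatusName":"Paused",
	         "Nodes":[{"Name":"pause-731652","Components":{
	           "apiserver":{"StatusCode":418,"StatusName":"Paused"},
	           "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Println(cs.StatusName, "/", cs.Nodes[0].Components["kubelet"].StatusName)
}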

                                                
                                    
TestPause/serial/Unpause (1.09s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-731652 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-731652 --alsologtostderr -v=5: (1.087813635s)
--- PASS: TestPause/serial/Unpause (1.09s)

TestPause/serial/PauseAgain (1.03s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-731652 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-731652 --alsologtostderr -v=5: (1.030429317s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

TestPause/serial/DeletePaused (2.78s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-731652 --alsologtostderr -v=5
E1212 00:42:37.351760  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-731652 --alsologtostderr -v=5: (2.781912539s)
--- PASS: TestPause/serial/DeletePaused (2.78s)

TestPause/serial/VerifyDeletedResources (0.37s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-731652
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-731652: exit status 1 (16.84997ms)

-- stdout --
	[]
-- /stdout --

** stderr **
	Error response from daemon: get pause-731652: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)
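
The cleanup check is indirect: "docker volume inspect" on a deleted volume exits 1 with "no such volume" on stderr and an empty "[]" on stdout, which is exactly the non-zero exit captured above. A sketch of the same probe (the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether "docker volume inspect" fails for the name,
// the same signal the test above uses to conclude cleanup succeeded.
func volumeGone(name string) bool {
	err := exec.Command("docker", "volume", "inspect", name).Run()
	return err != nil // exit status 1 plus "no such volume" on stderr
}

func main() {
	fmt.Println("pause-731652 removed:", volumeGone("pause-731652"))
}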

                                                
                                    
TestNetworkPlugins/group/false (5.88s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-589667 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-589667 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (274.040548ms)

-- stdout --
	* [false-589667] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1212 00:43:07.461591  452295 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:43:07.461787  452295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:43:07.461796  452295 out.go:358] Setting ErrFile to fd 2...
	I1212 00:43:07.461802  452295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:43:07.462042  452295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-267093/.minikube/bin
	I1212 00:43:07.462497  452295 out.go:352] Setting JSON to false
	I1212 00:43:07.463560  452295 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8729,"bootTime":1733955459,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 00:43:07.463633  452295 start.go:139] virtualization:  
	I1212 00:43:07.465646  452295 out.go:177] * [false-589667] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1212 00:43:07.467084  452295 notify.go:220] Checking for updates...
	I1212 00:43:07.467599  452295 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:43:07.468975  452295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:43:07.470453  452295 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-267093/kubeconfig
	I1212 00:43:07.471670  452295 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-267093/.minikube
	I1212 00:43:07.473109  452295 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:43:07.474289  452295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:43:07.476261  452295 config.go:182] Loaded profile config "force-systemd-flag-465270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:43:07.476370  452295 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:43:07.535817  452295 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1212 00:43:07.535935  452295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:43:07.639682  452295 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-12 00:43:07.626598111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1212 00:43:07.639800  452295 docker.go:318] overlay module found
	I1212 00:43:07.641397  452295 out.go:177] * Using the docker driver based on user configuration
	I1212 00:43:07.642683  452295 start.go:297] selected driver: docker
	I1212 00:43:07.642703  452295 start.go:901] validating driver "docker" against <nil>
	I1212 00:43:07.642717  452295 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:43:07.644563  452295 out.go:201] 
	W1212 00:43:07.645872  452295 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 00:43:07.647029  452295 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-589667 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-589667

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-589667

>>> host: /etc/nsswitch.conf:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /etc/hosts:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /etc/resolv.conf:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-589667

>>> host: crictl pods:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: crictl containers:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> k8s: describe netcat deployment:
error: context "false-589667" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-589667" does not exist

>>> k8s: netcat logs:
error: context "false-589667" does not exist

>>> k8s: describe coredns deployment:
error: context "false-589667" does not exist

>>> k8s: describe coredns pods:
error: context "false-589667" does not exist

>>> k8s: coredns logs:
error: context "false-589667" does not exist

>>> k8s: describe api server pod(s):
error: context "false-589667" does not exist

>>> k8s: api server logs:
error: context "false-589667" does not exist

>>> host: /etc/cni:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: ip a s:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: ip r s:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: iptables-save:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: iptables table nat:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> k8s: describe kube-proxy daemon set:
error: context "false-589667" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-589667" does not exist

>>> k8s: kube-proxy logs:
error: context "false-589667" does not exist

>>> host: kubelet daemon status:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: kubelet daemon config:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> k8s: kubelet logs:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-589667

>>> host: docker daemon status:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: docker daemon config:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /etc/docker/daemon.json:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: docker system info:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: cri-docker daemon status:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: cri-docker daemon config:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: cri-dockerd version:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: containerd daemon status:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: containerd daemon config:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /etc/containerd/config.toml:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: containerd config dump:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: crio daemon status:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: crio daemon config:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: /etc/crio:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"

>>> host: crio config:
* Profile "false-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589667"
----------------------- debugLogs end: false-589667 [took: 5.39371355s] --------------------------------
helpers_test.go:175: Cleaning up "false-589667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-589667
--- PASS: TestNetworkPlugins/group/false (5.88s)
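
This group passes because the failure is the expected outcome: with --container-runtime=crio, minikube rejects --cni=false up front ("Exiting due to MK_USAGE: The "crio" container runtime requires CNI") and exits with code 14, so no cluster is ever created and the debugLogs above all report a missing profile. A sketch of asserting that contract from Go (the expected code 14 is read off this log, not taken from any public constant):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// crio has no built-in networking, so minikube should refuse to start
	// it with CNI disabled and fail fast with a usage error.
	err := exec.Command("minikube", "start", "-p", "false-589667",
		"--cni=false", "--driver=docker", "--container-runtime=crio").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 { // MK_USAGE, per the log above
		fmt.Println("got the expected usage error: crio requires CNI")
		return
	}
	fmt.Println("unexpected result:", err)
}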

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (165.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-768787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1212 00:45:40.432182  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-768787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m45.452045789s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (165.45s)

TestStartStop/group/no-preload/serial/FirstStart (74.24s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-988343 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-988343 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (1m14.237863267s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.24s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-768787 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f2bea097-4efe-45c6-9bbb-30cca7288719] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f2bea097-4efe-45c6-9bbb-30cca7288719] Running
E1212 00:47:33.045300  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.005411433s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-768787 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.76s)
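
The deploy check ends by exec-ing into the pod and reading "ulimit -n", i.e. verifying the container actually accepts exec, not just that it reports Running. The same call issued from Go (a sketch; the test drives kubectl the same way through its own helpers):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run a trivial command inside the busybox pod; any output proves the
	// container is responsive, and the value is its open-file-descriptor limit.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-768787",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("open-file limit in pod: %s", out)
}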

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-768787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-768787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.37039362s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-768787 describe deploy/metrics-server -n kube-system
E1212 00:47:37.352081  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

TestStartStop/group/old-k8s-version/serial/Stop (13.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-768787 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-768787 --alsologtostderr -v=3: (13.743266357s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-768787 -n old-k8s-version-768787
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-768787 -n old-k8s-version-768787: exit status 7 (115.58251ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-768787 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)
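
The "exit status 7 (may be ok)" note is deliberate: minikube's status command encodes component health in exit-code bits (per its own help text: 1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so 7 is a fully stopped profile, exactly the state this step expects. A sketch of decoding it, assuming that bitmask scheme:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Assumes minikube's documented bitmask exit codes for "status":
	// bit 0 = host NOK, bit 1 = cluster NOK, bit 2 = kubernetes NOK.
	err := exec.Command("minikube", "status", "-p", "old-k8s-version-768787").Run()
	if err == nil {
		fmt.Println("everything running")
		return
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code := ee.ExitCode() // 7 for a stopped profile, as in the log
		fmt.Printf("host NOK=%v cluster NOK=%v kubernetes NOK=%v\n",
			code&1 != 0, code&2 != 0, code&4 != 0)
	}
}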

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (131.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-768787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-768787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m11.577137191s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-768787 -n old-k8s-version-768787
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (131.92s)

TestStartStop/group/no-preload/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-988343 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [48fba07f-195c-4717-bdcf-b08b17f6142d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [48fba07f-195c-4717-bdcf-b08b17f6142d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005025641s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-988343 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.45s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-988343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-988343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.232458292s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-988343 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-988343 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-988343 --alsologtostderr -v=3: (11.973526862s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-988343 -n no-preload-988343
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-988343 -n no-preload-988343: exit status 7 (77.009198ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-988343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (293.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-988343 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-988343 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m53.232211194s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-988343 -n no-preload-988343
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (293.58s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pjtlt" [c32c1e5e-2426-4e79-a5b5-6551d12834fd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004334204s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pjtlt" [c32c1e5e-2426-4e79-a5b5-6551d12834fd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004205138s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-768787 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-768787 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-768787 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-768787 -n old-k8s-version-768787
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-768787 -n old-k8s-version-768787: exit status 2 (342.321962ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-768787 -n old-k8s-version-768787
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-768787 -n old-k8s-version-768787: exit status 2 (337.338431ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-768787 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-768787 -n old-k8s-version-768787
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-768787 -n old-k8s-version-768787
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.14s)

TestStartStop/group/embed-certs/serial/FirstStart (51.61s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-423312 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-423312 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (51.608512299s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.61s)

TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-423312 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c2fe4d13-12d8-4bc1-83b1-6d9e279b8c3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c2fe4d13-12d8-4bc1-83b1-6d9e279b8c3b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004656804s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-423312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)
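The DeployApp step applies the busybox fixture, waits for it to become Ready, and then reads the container's open-file limit. An equivalent hand-run sequence (testdata/busybox.yaml is a path inside the minikube test tree):

    kubectl --context embed-certs-423312 create -f testdata/busybox.yaml
    kubectl --context embed-certs-423312 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-423312 exec busybox -- /bin/sh -c "ulimit -n"   # prints the container's fd limit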

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-423312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-423312 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)
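This step exercises minikube's per-addon image and registry overrides; fake.domain guarantees the metrics-server image can never actually be pulled, and the follow-up describe is how the test inspects the result. The same invocation, wrapped for readability:

    minikube addons enable metrics-server -p embed-certs-423312 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # confirm the rewritten image reference landed on the deployment spec
    kubectl --context embed-certs-423312 describe deploy/metrics-server -n kube-system | grep -i image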

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-423312 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-423312 --alsologtostderr -v=3: (11.926585158s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-423312 -n embed-certs-423312
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-423312 -n embed-certs-423312: exit status 7 (73.32552ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-423312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
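Exit status 7 from status --format={{.Host}} is how minikube reports a stopped host (hence the harness's "may be ok"), and addons can still be enabled against the stopped profile. A sketch of the same guard:

    minikube status --format='{{.Host}}' -p embed-certs-423312
    if [ $? -eq 7 ]; then   # 7 == host Stopped, expected right after 'minikube stop'
      minikube addons enable dashboard -p embed-certs-423312 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi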

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.79s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-423312 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1212 00:52:24.657119  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:24.663571  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:24.674938  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:24.696356  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:24.737742  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:24.819155  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:24.980804  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:25.302711  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:25.944689  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:27.226583  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:29.787935  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:33.044820  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:34.909374  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:37.351681  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:45.155011  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:53:05.640210  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:53:46.602135  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-423312 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m27.429546888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-423312 -n embed-certs-423312
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g9fm5" [8635d107-e739-47df-9dbf-5593af4de263] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005344404s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g9fm5" [8635d107-e739-47df-9dbf-5593af4de263] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003908874s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-988343 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)
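UserAppExistsAfterStop and AddonExistsAfterStop both poll the same labelled dashboard pod after the restart; the latter additionally confirms the scraper deployment survived. Roughly equivalent by hand:

    kubectl --context no-preload-988343 -n kubernetes-dashboard \
      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
    kubectl --context no-preload-988343 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper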

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-988343 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
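VerifyKubernetesImages dumps every image loaded in the node as JSON and reports anything outside the stock Kubernetes/minikube set (here kindnetd and the busybox fixture). To eyeball the same list by hand (the jq filter assumes the JSON entries expose a repoTags array, which is how recent minikube releases emit them):

    minikube -p no-preload-988343 image list --format=json | jq -r '.[].repoTags[]'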

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.03s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-988343 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-988343 -n no-preload-988343
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-988343 -n no-preload-988343: exit status 2 (315.600504ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-988343 -n no-preload-988343
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-988343 -n no-preload-988343: exit status 2 (339.97934ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-988343 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-988343 -n no-preload-988343
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-988343 -n no-preload-988343
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-065138 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-065138 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (50.820885492s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-065138 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [74f92b2b-a735-4567-8c49-f90811a3035e] Pending
helpers_test.go:344: "busybox" [74f92b2b-a735-4567-8c49-f90811a3035e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 00:55:08.524074  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [74f92b2b-a735-4567-8c49-f90811a3035e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005035818s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-065138 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-065138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-065138 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-065138 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-065138 --alsologtostderr -v=3: (11.968912426s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138: exit status 7 (88.439996ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-065138 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-065138 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-065138 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m26.734420531s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-txzmp" [a3047f49-b433-49aa-8ab7-be5635c0523a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003730536s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-txzmp" [a3047f49-b433-49aa-8ab7-be5635c0523a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004077337s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-423312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-423312 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-423312 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-423312 -n embed-certs-423312
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-423312 -n embed-certs-423312: exit status 2 (367.466363ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-423312 -n embed-certs-423312
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-423312 -n embed-certs-423312: exit status 2 (341.090796ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-423312 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-423312 -n embed-certs-423312
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-423312 -n embed-certs-423312
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-049162 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-049162 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (37.939521598s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.94s)
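The newest-cni group starts Kubernetes with CNI networking selected but no plugin installed, so the start flags trim the readiness wait to the apiserver, system pods and default service account, and hand the pod CIDR straight to kubeadm; that is also why DeployApp and the post-stop app checks in this group are no-ops (see the warnings below). The same invocation, wrapped for readability:

    minikube start -p newest-cni-049162 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.2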

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-049162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-049162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.417885801s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-049162 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-049162 --alsologtostderr -v=3: (1.297420282s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-049162 -n newest-cni-049162
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-049162 -n newest-cni-049162: exit status 7 (79.741712ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-049162 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.15s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-049162 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1212 00:57:16.114363  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-049162 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (15.575221334s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-049162 -n newest-cni-049162
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-049162 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-049162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-049162 -n newest-cni-049162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-049162 -n newest-cni-049162: exit status 2 (328.440516ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-049162 -n newest-cni-049162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-049162 -n newest-cni-049162: exit status 2 (331.157318ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-049162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-049162 -n newest-cni-049162
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-049162 -n newest-cni-049162
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.52s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1212 00:57:24.656919  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:57:33.044864  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:57:37.351574  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:57:52.365691  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.517527918s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.52s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-589667 "pgrep -a kubelet"
I1212 00:58:17.900020  272599 config.go:182] Loaded profile config "auto-589667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
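KubeletFlags simply greps the node's process table over SSH; pgrep -a returns the kubelet PID plus its full command line, which is where the configured flags are checked:

    minikube ssh -p auto-589667 "pgrep -a kubelet"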

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-589667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-djwnq" [88ca7d22-0555-40df-aca1-e21c7031cbb4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-djwnq" [88ca7d22-0555-40df-aca1-e21c7031cbb4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004402206s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-589667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
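DNS, Localhost and HairPin reuse the netcat deployment for three connectivity probes: in-cluster name resolution, loopback reachability, and hairpin traffic, i.e. the pod reaching itself back through its own netcat Service. Run by hand:

    kubectl --context auto-589667 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"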

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (50.56s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1212 00:58:59.001686  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/no-preload-988343/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:59:19.483823  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/no-preload-988343/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (50.562563599s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.56s)
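The CNI matrix is driven by minikube's --cni flag, which takes either a built-in plugin name or a path to a custom manifest; the custom-flannel group further down uses the second form:

    # built-in plugin by name
    minikube start -p kindnet-589667 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio
    # custom manifest by path (relative to the test's working directory)
    minikube start -p custom-flannel-589667 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio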

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4cnk2" [d8f0a4e5-723a-4608-b637-e66d80eba5c4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003571893s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-589667 "pgrep -a kubelet"
I1212 00:59:47.173160  272599 config.go:182] Loaded profile config "kindnet-589667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-589667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q5wcn" [80ab3c03-59ce-4170-aaa6-08a0628e00e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q5wcn" [80ab3c03-59ce-4170-aaa6-08a0628e00e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004068291s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5qnbv" [f977ed36-c378-441f-8d71-6af64087ba29] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006690695s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-589667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5qnbv" [f977ed36-c378-441f-8d71-6af64087ba29] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004162607s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-065138 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-065138 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-065138 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-065138 --alsologtostderr -v=1: (1.157423755s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138: exit status 2 (390.265789ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138: exit status 2 (432.350329ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-065138 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-065138 -n default-k8s-diff-port-065138
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)
E1212 01:04:06.211648  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/no-preload-988343/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:40.091861  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:40.873417  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:40.879963  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:40.891470  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:40.912952  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:40.954414  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:41.036029  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:41.198171  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:41.519879  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:42.162030  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:43.444391  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:46.007672  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:04:51.130042  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/calico/Start (69.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.731581805s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.73s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1212 01:01:22.369786  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/no-preload-988343/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.499313436s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.50s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-589667 "pgrep -a kubelet"
I1212 01:01:24.120618  272599 config.go:182] Loaded profile config "custom-flannel-589667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-589667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z7nkt" [7ac1ef60-7c08-4a4a-aa56-1c1bdcb2332a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z7nkt" [7ac1ef60-7c08-4a4a-aa56-1c1bdcb2332a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004200119s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gx76c" [323737dd-5a6c-4cae-b7ee-1c06bf9b5702] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005882954s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-589667 "pgrep -a kubelet"
I1212 01:01:31.962041  272599 config.go:182] Loaded profile config "calico-589667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-589667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bntsq" [09cb0ee9-f6db-496e-a70b-dc1b9b71d393] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bntsq" [09cb0ee9-f6db-496e-a70b-dc1b9b71d393] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004084413s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-589667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-589667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.40s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)
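
The DNS/Localhost/HairPin triple that closes each plugin group exercises three distinct data paths from inside the netcat pod; annotated copies of the exact commands from the runs above:

    # DNS: resolve the kubernetes.default service through the cluster resolver
    kubectl --context calico-589667 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: reach the pod's own port over the loopback interface
    kubectl --context calico-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: dial back in through the pod's own service name, which only
    # succeeds if the CNI supports hairpin traffic
    kubectl --context calico-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"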

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (82.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m22.717829516s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.72s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1212 01:02:20.433612  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:02:24.656845  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/old-k8s-version-768787/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:02:33.044616  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/addons-680529/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:02:37.352119  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/functional-931406/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.514975798s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.52s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t87pl" [a4c0af62-28c1-4bc5-aaa3-e8bd9675692b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004093712s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
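
ControllerPod only asserts that the CNI's own workload is healthy; for flannel that is the kube-flannel-ds DaemonSet pod selected by app=flannel in the kube-flannel namespace. A manual equivalent (the kubectl wait form is an assumption; the harness polls pod status itself, within the 10m budget shown above):

    kubectl --context flannel-589667 get pods -n kube-flannel -l app=flannel
    kubectl --context flannel-589667 wait --for=condition=Ready pod -l app=flannel -n kube-flannel --timeout=10m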

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-589667 "pgrep -a kubelet"
I1212 01:03:17.311525  272599 config.go:182] Loaded profile config "flannel-589667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-589667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d6c7c" [27ba2b71-d8be-4d43-a451-da048caa0f21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:03:18.153791  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:18.160171  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:18.171540  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:18.193032  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:18.234509  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:18.316015  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:18.477527  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:18.799143  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:19.440940  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:03:20.723247  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-d6c7c" [27ba2b71-d8be-4d43-a451-da048caa0f21] Running
E1212 01:03:23.284826  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/auto-589667/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004663242s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-589667 "pgrep -a kubelet"
I1212 01:03:24.050729  272599 config.go:182] Loaded profile config "enable-default-cni-589667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-589667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5kmmh" [01ccd05e-afb0-4961-a4f2-bd441bcf6777] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5kmmh" [01ccd05e-afb0-4961-a4f2-bd441bcf6777] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004581819s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-589667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-589667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (68.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-589667 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.492224665s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.49s)
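
With --cni=bridge, minikube writes a bridge CNI config onto the node rather than deploying a manifest, so there is no ControllerPod step for this group. The generated config can be inspected over the same ssh path the other checks use (a sketch; /etc/cni/net.d is the conventional CNI config directory and the conflist glob is an assumption, neither is shown in the log):

    out/minikube-linux-arm64 ssh -p bridge-589667 "ls /etc/cni/net.d"
    out/minikube-linux-arm64 ssh -p bridge-589667 "sudo cat /etc/cni/net.d/*.conflist"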

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-589667 "pgrep -a kubelet"
I1212 01:04:59.915959  272599 config.go:182] Loaded profile config "bridge-589667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-589667 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-92fvs" [5ff444aa-10dc-4c54-8fa9-bc96f9158f77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:05:01.372349  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/kindnet-589667/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:05.349584  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:05.356091  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:05.367548  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:05.389164  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:05.430629  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-92fvs" [5ff444aa-10dc-4c54-8fa9-bc96f9158f77] Running
E1212 01:05:05.512863  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:05.674549  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:05.996212  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:06.638182  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:07.920130  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:05:10.481874  272599 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/default-k8s-diff-port-065138/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004192499s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.54s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-589667 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-589667 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (31/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.56s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-474606 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-474606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-474606
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-680529 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-261992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-261992
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-589667 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-589667" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-589667

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: cri-dockerd version:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: containerd daemon status:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: containerd daemon config:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: containerd config dump:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: crio daemon status:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: crio daemon config:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: /etc/crio:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

>>> host: crio config:
* Profile "kubenet-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589667"

----------------------- debugLogs end: kubenet-589667 [took: 4.633379459s] --------------------------------
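Every probe above prints the same pair of messages because the "kubenet-589667" profile never existed on this host: the test skips before "minikube start" runs, so debugLogs has no node to inspect. For reference, a minimal sketch of reproducing one of these host probes by hand, assuming the profile name from this log (the profile would first have to be started):

    minikube profile list                      # kubenet-589667 is absent on this host
    minikube start -p kubenet-589667           # create the profile the probes expect
    minikube ssh -p kubenet-589667 "sudo cat /etc/containerd/config.toml"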
helpers_test.go:175: Cleaning up "kubenet-589667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-589667
--- SKIP: TestNetworkPlugins/group/kubenet (4.91s)

x
+
TestNetworkPlugins/group/cilium (5.75s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-589667 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-589667

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-589667

>>> host: /etc/nsswitch.conf:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /etc/hosts:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /etc/resolv.conf:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-589667

>>> host: crictl pods:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: crictl containers:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> k8s: describe netcat deployment:
error: context "cilium-589667" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-589667" does not exist

>>> k8s: netcat logs:
error: context "cilium-589667" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-589667" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-589667" does not exist

>>> k8s: coredns logs:
error: context "cilium-589667" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-589667" does not exist

>>> k8s: api server logs:
error: context "cilium-589667" does not exist

>>> host: /etc/cni:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: ip a s:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: ip r s:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: iptables-save:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: iptables table nat:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-589667

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-589667

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-589667" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-589667" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-589667

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-589667

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-589667" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-589667" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-589667" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-589667" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-589667" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: kubelet daemon config:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> k8s: kubelet logs:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20083-267093/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Dec 2024 00:43:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-465270
contexts:
- context:
    cluster: force-systemd-flag-465270
    extensions:
    - extension:
        last-update: Thu, 12 Dec 2024 00:43:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-flag-465270
  name: force-systemd-flag-465270
current-context: force-systemd-flag-465270
kind: Config
preferences: {}
users:
- name: force-systemd-flag-465270
  user:
    client-certificate: /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/force-systemd-flag-465270/client.crt
    client-key: /home/jenkins/minikube-integration/20083-267093/.minikube/profiles/force-systemd-flag-465270/client.key
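
Note that the kubeconfig dumped above belongs to force-systemd-flag-465270, an unrelated profile that owned the current context while this debug run executed; because cilium-589667 was never started, the kubectl config dump simply reports whatever context happens to be active. A quick way to confirm what a debug run like this would see, using standard kubectl subcommands:

    kubectl config current-context    # prints the active context (here: force-systemd-flag-465270)
    kubectl config get-contexts      # lists all contexts; cilium-589667 does not appear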

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-589667

>>> host: docker daemon status:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: docker daemon config:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: docker system info:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: cri-docker daemon status:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: cri-docker daemon config:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: cri-dockerd version:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: containerd daemon status:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: containerd daemon config:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: containerd config dump:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: crio daemon status:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: crio daemon config:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: /etc/crio:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

>>> host: crio config:
* Profile "cilium-589667" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589667"

----------------------- debugLogs end: cilium-589667 [took: 5.540418876s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-589667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-589667
--- SKIP: TestNetworkPlugins/group/cilium (5.75s)
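
Both network-plugin groups above were skipped intentionally (net_test.go:102), so their debug logs contain only "profile/context not found" noise rather than real failures. To re-run just these groups against a local build, the standard Go test filter applies; exact harness flags vary by environment, so treat this as a sketch:

    # -run narrows execution to these groups; they will still self-skip
    # unless the skip at net_test.go:102 is removed locally
    go test ./test/integration -run 'TestNetworkPlugins/group/(cilium|kubenet)' -timeout 60m -v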

                                                
                                    