Test Report: Docker_Linux_crio_arm64 20325

73d4c4cc05833259b18cd28e4f502fc92150767e:2025-01-27:38095
Test fail (1/330)

|-------|-----------------------------|----------|
| Order |         Failed test         | Duration |
|-------|-----------------------------|----------|
| 36    | TestAddons/parallel/Ingress | 153.76s  |
|-------|-----------------------------|----------|
TestAddons/parallel/Ingress (153.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-790770 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-790770 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-790770 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b5a6439b-1835-4871-b16f-1bfacc34f700] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b5a6439b-1835-4871-b16f-1bfacc34f700] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004171998s
I0127 14:03:56.233652 1183449 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-790770 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.393332182s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-790770 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-790770
helpers_test.go:235: (dbg) docker inspect addons-790770:

-- stdout --
	[
	    {
	        "Id": "b4ddec1f821217327b445106d014b6e3fc930cf0c66ab6baa1e14f85dd1c1ce6",
	        "Created": "2025-01-27T13:59:05.6995564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1184705,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T13:59:05.855341651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/b4ddec1f821217327b445106d014b6e3fc930cf0c66ab6baa1e14f85dd1c1ce6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b4ddec1f821217327b445106d014b6e3fc930cf0c66ab6baa1e14f85dd1c1ce6/hostname",
	        "HostsPath": "/var/lib/docker/containers/b4ddec1f821217327b445106d014b6e3fc930cf0c66ab6baa1e14f85dd1c1ce6/hosts",
	        "LogPath": "/var/lib/docker/containers/b4ddec1f821217327b445106d014b6e3fc930cf0c66ab6baa1e14f85dd1c1ce6/b4ddec1f821217327b445106d014b6e3fc930cf0c66ab6baa1e14f85dd1c1ce6-json.log",
	        "Name": "/addons-790770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-790770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-790770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ed1b2ff92a3a1aef8664b29f7422d7eab24670237ab71adce0b0ad99425ca30e-init/diff:/var/lib/docker/overlay2/452674851ccac5ec175d01478449b11ec41cb82c3cdcc911527148319e0d3e15/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed1b2ff92a3a1aef8664b29f7422d7eab24670237ab71adce0b0ad99425ca30e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed1b2ff92a3a1aef8664b29f7422d7eab24670237ab71adce0b0ad99425ca30e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed1b2ff92a3a1aef8664b29f7422d7eab24670237ab71adce0b0ad99425ca30e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-790770",
	                "Source": "/var/lib/docker/volumes/addons-790770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-790770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-790770",
	                "name.minikube.sigs.k8s.io": "addons-790770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fd24f7e0166e142cf7b021f4f25f5c4366beec102df340ba5322c273ca67bd3",
	            "SandboxKey": "/var/run/docker/netns/3fd24f7e0166",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33930"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33931"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33934"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33932"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33933"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-790770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "7fe96c6da52bdf233e40c8b2562cfb1a8f39e41ae45bedad1f5783cb0765b095",
	                    "EndpointID": "1b3208ed1822882b6fe77340b3870feddb471ce9c8933231a3df1744032929a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-790770",
	                        "b4ddec1f8212"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-790770 -n addons-790770
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-790770 logs -n 25: (1.662420687s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-017417                                                                     | download-only-017417   | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 13:58 UTC |
	| start   | --download-only -p                                                                          | download-docker-660192 | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC |                     |
	|         | download-docker-660192                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-660192                                                                   | download-docker-660192 | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 13:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-617848   | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC |                     |
	|         | binary-mirror-617848                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43587                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-617848                                                                     | binary-mirror-617848   | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 13:58 UTC |
	| addons  | enable dashboard -p                                                                         | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC |                     |
	|         | addons-790770                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC |                     |
	|         | addons-790770                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-790770 --wait=true                                                                | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 14:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-790770 addons disable                                                                | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:02 UTC | 27 Jan 25 14:02 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-790770 addons disable                                                                | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:02 UTC | 27 Jan 25 14:02 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:02 UTC | 27 Jan 25 14:02 UTC |
	|         | -p addons-790770                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-790770 ip                                                                            | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC | 27 Jan 25 14:03 UTC |
	| addons  | addons-790770 addons disable                                                                | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC | 27 Jan 25 14:03 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-790770 addons disable                                                                | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC | 27 Jan 25 14:03 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-790770 addons                                                                        | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC | 27 Jan 25 14:03 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-790770 addons                                                                        | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC | 27 Jan 25 14:03 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-790770 ssh curl -s                                                                   | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-790770 addons                                                                        | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:04 UTC | 27 Jan 25 14:04 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-790770 addons                                                                        | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:04 UTC | 27 Jan 25 14:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-790770 ssh cat                                                                       | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:04 UTC | 27 Jan 25 14:04 UTC |
	|         | /opt/local-path-provisioner/pvc-7efe29f9-361e-427c-a708-ae898111e7ca_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-790770 addons disable                                                                | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:04 UTC | 27 Jan 25 14:05 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-790770 addons disable                                                                | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-790770 addons                                                                        | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-790770 addons                                                                        | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-790770 ip                                                                            | addons-790770          | jenkins | v1.35.0 | 27 Jan 25 14:06 UTC | 27 Jan 25 14:06 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:58:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:58:39.926314 1184207 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:58:39.926502 1184207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:58:39.926533 1184207 out.go:358] Setting ErrFile to fd 2...
	I0127 13:58:39.926553 1184207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:58:39.926808 1184207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 13:58:39.927297 1184207 out.go:352] Setting JSON to false
	I0127 13:58:39.928273 1184207 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13271,"bootTime":1737973049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0127 13:58:39.928406 1184207 start.go:139] virtualization:  
	I0127 13:58:39.932187 1184207 out.go:177] * [addons-790770] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 13:58:39.935238 1184207 out.go:177]   - MINIKUBE_LOCATION=20325
	I0127 13:58:39.935395 1184207 notify.go:220] Checking for updates...
	I0127 13:58:39.941036 1184207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:58:39.943943 1184207 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	I0127 13:58:39.946826 1184207 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	I0127 13:58:39.949793 1184207 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 13:58:39.952758 1184207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:58:39.955796 1184207 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:58:39.986225 1184207 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 13:58:39.986355 1184207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:58:40.054466 1184207 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-01-27 13:58:40.044028953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:58:40.054606 1184207 docker.go:318] overlay module found
	I0127 13:58:40.057807 1184207 out.go:177] * Using the docker driver based on user configuration
	I0127 13:58:40.060727 1184207 start.go:297] selected driver: docker
	I0127 13:58:40.060760 1184207 start.go:901] validating driver "docker" against <nil>
	I0127 13:58:40.060779 1184207 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:58:40.061612 1184207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:58:40.116908 1184207 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-01-27 13:58:40.107231573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:58:40.117139 1184207 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:58:40.117368 1184207 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:58:40.120705 1184207 out.go:177] * Using Docker driver with root privileges
	I0127 13:58:40.123731 1184207 cni.go:84] Creating CNI manager for ""
	I0127 13:58:40.123818 1184207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 13:58:40.123831 1184207 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 13:58:40.123934 1184207 start.go:340] cluster config:
	{Name:addons-790770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-790770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:58:40.127021 1184207 out.go:177] * Starting "addons-790770" primary control-plane node in "addons-790770" cluster
	I0127 13:58:40.129903 1184207 cache.go:121] Beginning downloading kic base image for docker with crio
	I0127 13:58:40.132943 1184207 out.go:177] * Pulling base image v0.0.46 ...
	I0127 13:58:40.135911 1184207 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:58:40.135978 1184207 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0127 13:58:40.135987 1184207 cache.go:56] Caching tarball of preloaded images
	I0127 13:58:40.136024 1184207 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 13:58:40.136076 1184207 preload.go:172] Found /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0127 13:58:40.136087 1184207 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:58:40.136454 1184207 profile.go:143] Saving config to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/config.json ...
	I0127 13:58:40.136489 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/config.json: {Name:mk59cae690fbd67b08a1a111fde95f7c3ec5ff17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:58:40.152899 1184207 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 13:58:40.153039 1184207 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 13:58:40.153065 1184207 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0127 13:58:40.153071 1184207 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0127 13:58:40.153082 1184207 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0127 13:58:40.153089 1184207 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I0127 13:58:57.499452 1184207 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I0127 13:58:57.499495 1184207 cache.go:227] Successfully downloaded all kic artifacts
	I0127 13:58:57.499536 1184207 start.go:360] acquireMachinesLock for addons-790770: {Name:mkffd352b33c72549436eac8c1d2ae2ed6eeb83b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:58:57.499675 1184207 start.go:364] duration metric: took 119.82µs to acquireMachinesLock for "addons-790770"
	I0127 13:58:57.499704 1184207 start.go:93] Provisioning new machine with config: &{Name:addons-790770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-790770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:58:57.499785 1184207 start.go:125] createHost starting for "" (driver="docker")
	I0127 13:58:57.503229 1184207 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0127 13:58:57.503512 1184207 start.go:159] libmachine.API.Create for "addons-790770" (driver="docker")
	I0127 13:58:57.503551 1184207 client.go:168] LocalClient.Create starting
	I0127 13:58:57.503683 1184207 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca.pem
	I0127 13:58:57.996795 1184207 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/cert.pem
	I0127 13:58:59.097725 1184207 cli_runner.go:164] Run: docker network inspect addons-790770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 13:58:59.115377 1184207 cli_runner.go:211] docker network inspect addons-790770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 13:58:59.115459 1184207 network_create.go:284] running [docker network inspect addons-790770] to gather additional debugging logs...
	I0127 13:58:59.115480 1184207 cli_runner.go:164] Run: docker network inspect addons-790770
	W0127 13:58:59.131697 1184207 cli_runner.go:211] docker network inspect addons-790770 returned with exit code 1
	I0127 13:58:59.131728 1184207 network_create.go:287] error running [docker network inspect addons-790770]: docker network inspect addons-790770: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-790770 not found
	I0127 13:58:59.131743 1184207 network_create.go:289] output of [docker network inspect addons-790770]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-790770 not found
	
	** /stderr **
	I0127 13:58:59.131838 1184207 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 13:58:59.147917 1184207 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400190be90}
	I0127 13:58:59.147968 1184207 network_create.go:124] attempt to create docker network addons-790770 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0127 13:58:59.148029 1184207 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-790770 addons-790770
	I0127 13:58:59.225570 1184207 network_create.go:108] docker network addons-790770 192.168.49.0/24 created
	I0127 13:58:59.225613 1184207 kic.go:121] calculated static IP "192.168.49.2" for the "addons-790770" container
	I0127 13:58:59.225699 1184207 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 13:58:59.241370 1184207 cli_runner.go:164] Run: docker volume create addons-790770 --label name.minikube.sigs.k8s.io=addons-790770 --label created_by.minikube.sigs.k8s.io=true
	I0127 13:58:59.258767 1184207 oci.go:103] Successfully created a docker volume addons-790770
	I0127 13:58:59.258862 1184207 cli_runner.go:164] Run: docker run --rm --name addons-790770-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-790770 --entrypoint /usr/bin/test -v addons-790770:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 13:59:01.404189 1184207 cli_runner.go:217] Completed: docker run --rm --name addons-790770-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-790770 --entrypoint /usr/bin/test -v addons-790770:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (2.145287133s)
	I0127 13:59:01.404221 1184207 oci.go:107] Successfully prepared a docker volume addons-790770
	I0127 13:59:01.404243 1184207 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:59:01.404263 1184207 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 13:59:01.404333 1184207 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-790770:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 13:59:05.619049 1184207 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-790770:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.214672719s)
	I0127 13:59:05.619086 1184207 kic.go:203] duration metric: took 4.214819985s to extract preloaded images to volume ...
	W0127 13:59:05.619235 1184207 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 13:59:05.619353 1184207 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 13:59:05.684949 1184207 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-790770 --name addons-790770 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-790770 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-790770 --network addons-790770 --ip 192.168.49.2 --volume addons-790770:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 13:59:06.035981 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Running}}
	I0127 13:59:06.072201 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:06.097959 1184207 cli_runner.go:164] Run: docker exec addons-790770 stat /var/lib/dpkg/alternatives/iptables
	I0127 13:59:06.144945 1184207 oci.go:144] the created container "addons-790770" has a running status.
	I0127 13:59:06.144978 1184207 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa...
	I0127 13:59:06.628703 1184207 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 13:59:06.665123 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:06.687558 1184207 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 13:59:06.687582 1184207 kic_runner.go:114] Args: [docker exec --privileged addons-790770 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 13:59:06.764774 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:06.791700 1184207 machine.go:93] provisionDockerMachine start ...
	I0127 13:59:06.791794 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:06.821383 1184207 main.go:141] libmachine: Using SSH client type: native
	I0127 13:59:06.821662 1184207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33930 <nil> <nil>}
	I0127 13:59:06.821672 1184207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:59:06.964662 1184207 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-790770
	
	I0127 13:59:06.964689 1184207 ubuntu.go:169] provisioning hostname "addons-790770"
	I0127 13:59:06.964759 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:06.986018 1184207 main.go:141] libmachine: Using SSH client type: native
	I0127 13:59:06.986275 1184207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33930 <nil> <nil>}
	I0127 13:59:06.986295 1184207 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-790770 && echo "addons-790770" | sudo tee /etc/hostname
	I0127 13:59:07.125574 1184207 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-790770
	
	I0127 13:59:07.125656 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:07.148261 1184207 main.go:141] libmachine: Using SSH client type: native
	I0127 13:59:07.148579 1184207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33930 <nil> <nil>}
	I0127 13:59:07.148605 1184207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-790770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-790770/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-790770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:59:07.276861 1184207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:59:07.276891 1184207 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20325-1178062/.minikube CaCertPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20325-1178062/.minikube}
	I0127 13:59:07.276916 1184207 ubuntu.go:177] setting up certificates
	I0127 13:59:07.276927 1184207 provision.go:84] configureAuth start
	I0127 13:59:07.276989 1184207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-790770
	I0127 13:59:07.294648 1184207 provision.go:143] copyHostCerts
	I0127 13:59:07.294731 1184207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.pem (1082 bytes)
	I0127 13:59:07.294852 1184207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20325-1178062/.minikube/cert.pem (1123 bytes)
	I0127 13:59:07.294905 1184207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20325-1178062/.minikube/key.pem (1675 bytes)
	I0127 13:59:07.294949 1184207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20325-1178062/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca-key.pem org=jenkins.addons-790770 san=[127.0.0.1 192.168.49.2 addons-790770 localhost minikube]
	I0127 13:59:08.095779 1184207 provision.go:177] copyRemoteCerts
	I0127 13:59:08.095855 1184207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:59:08.095898 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:08.114629 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:08.205720 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 13:59:08.230383 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 13:59:08.254782 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 13:59:08.278644 1184207 provision.go:87] duration metric: took 1.001702707s to configureAuth
	I0127 13:59:08.278676 1184207 ubuntu.go:193] setting minikube options for container-runtime
	I0127 13:59:08.278908 1184207 config.go:182] Loaded profile config "addons-790770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:59:08.279020 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:08.295602 1184207 main.go:141] libmachine: Using SSH client type: native
	I0127 13:59:08.295859 1184207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33930 <nil> <nil>}
	I0127 13:59:08.295885 1184207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:59:08.518414 1184207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:59:08.518441 1184207 machine.go:96] duration metric: took 1.726722485s to provisionDockerMachine
	I0127 13:59:08.518452 1184207 client.go:171] duration metric: took 11.014891378s to LocalClient.Create
	I0127 13:59:08.518465 1184207 start.go:167] duration metric: took 11.01495505s to libmachine.API.Create "addons-790770"
	I0127 13:59:08.518473 1184207 start.go:293] postStartSetup for "addons-790770" (driver="docker")
	I0127 13:59:08.518485 1184207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:59:08.518548 1184207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:59:08.518593 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:08.535326 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:08.626178 1184207 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:59:08.629319 1184207 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 13:59:08.629357 1184207 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 13:59:08.629370 1184207 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 13:59:08.629378 1184207 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 13:59:08.629396 1184207 filesync.go:126] Scanning /home/jenkins/minikube-integration/20325-1178062/.minikube/addons for local assets ...
	I0127 13:59:08.629469 1184207 filesync.go:126] Scanning /home/jenkins/minikube-integration/20325-1178062/.minikube/files for local assets ...
	I0127 13:59:08.629498 1184207 start.go:296] duration metric: took 111.017179ms for postStartSetup
	I0127 13:59:08.629823 1184207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-790770
	I0127 13:59:08.646947 1184207 profile.go:143] Saving config to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/config.json ...
	I0127 13:59:08.647249 1184207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:59:08.647312 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:08.663835 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:08.749650 1184207 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 13:59:08.754276 1184207 start.go:128] duration metric: took 11.254475513s to createHost
	I0127 13:59:08.754299 1184207 start.go:83] releasing machines lock for "addons-790770", held for 11.254614304s
	I0127 13:59:08.754370 1184207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-790770
	I0127 13:59:08.770902 1184207 ssh_runner.go:195] Run: cat /version.json
	I0127 13:59:08.770956 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:08.771213 1184207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:59:08.771281 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:08.792241 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:08.799749 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:08.880210 1184207 ssh_runner.go:195] Run: systemctl --version
	I0127 13:59:09.029845 1184207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:59:09.173016 1184207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 13:59:09.178434 1184207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:59:09.203984 1184207 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0127 13:59:09.204068 1184207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:59:09.239817 1184207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
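The two `find` runs logged above disable pre-existing CNI configs by renaming them rather than deleting them. A minimal sketch of that rename idiom, run against a temp directory standing in for `/etc/cni/net.d` (filenames illustrative, no sudo needed here):

```shell
# Create a stand-in for /etc/cni/net.d with a few example configs.
cni_dir=$(mktemp -d)
touch "$cni_dir/87-podman-bridge.conflist" \
      "$cni_dir/100-crio-bridge.conf" \
      "$cni_dir/10-kindnet.conflist"
# Rename bridge/podman configs that are not already disabled; the
# .mk_disabled suffix makes the runtime skip them but keeps them restorable.
find "$cni_dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$cni_dir"
```

Because the `-not -name '*.mk_disabled'` predicate excludes already-renamed files, running the step twice is a no-op the second time.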
	I0127 13:59:09.239840 1184207 start.go:495] detecting cgroup driver to use...
	I0127 13:59:09.239873 1184207 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 13:59:09.239924 1184207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:59:09.255873 1184207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:59:09.268247 1184207 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:59:09.268362 1184207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:59:09.285387 1184207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:59:09.301785 1184207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:59:09.406759 1184207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:59:09.504527 1184207 docker.go:233] disabling docker service ...
	I0127 13:59:09.504597 1184207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:59:09.526249 1184207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:59:09.539545 1184207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:59:09.635342 1184207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:59:09.739401 1184207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:59:09.750985 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:59:09.767315 1184207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:59:09.767405 1184207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:59:09.777362 1184207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:59:09.777472 1184207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:59:09.788471 1184207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:59:09.799796 1184207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:59:09.810696 1184207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:59:09.822706 1184207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:59:09.833512 1184207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:59:09.851086 1184207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
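The `grep || sed` pair logged above is an idempotent insert: an empty `default_sysctls` array is appended after `conmon_cgroup` only if one is not already present, then the sysctl entry is spliced into the top of the array. A sketch against a temp file standing in for `/etc/crio/crio.conf.d/02-crio.conf` (GNU sed assumed, as on the Ubuntu guest):

```shell
# Stand-in for 02-crio.conf with the lines the earlier sed edits produced.
conf=$(mktemp)
printf '%s\n' 'cgroup_manager = "cgroupfs"' 'conmon_cgroup = "pod"' > "$conf"
# Append an empty default_sysctls = [ ] block only if none exists yet.
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$conf"
# Insert the unprivileged-port sysctl right after the opening bracket.
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
cat "$conf"
```

Re-running both commands leaves the file unchanged, since the `grep -q` guard short-circuits the append once the array exists.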
	I0127 13:59:09.861274 1184207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:59:09.869992 1184207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:59:09.878621 1184207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:59:09.965405 1184207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:59:10.077472 1184207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:59:10.077562 1184207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:59:10.081964 1184207 start.go:563] Will wait 60s for crictl version
	I0127 13:59:10.082030 1184207 ssh_runner.go:195] Run: which crictl
	I0127 13:59:10.085690 1184207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:59:10.123743 1184207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0127 13:59:10.123885 1184207 ssh_runner.go:195] Run: crio --version
	I0127 13:59:10.161978 1184207 ssh_runner.go:195] Run: crio --version
	I0127 13:59:10.206028 1184207 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0127 13:59:10.208962 1184207 cli_runner.go:164] Run: docker network inspect addons-790770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 13:59:10.224662 1184207 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0127 13:59:10.228443 1184207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
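The `/etc/hosts` rewrite logged above first drops any existing `host.minikube.internal` line, then appends the fresh mapping, so repeated runs leave exactly one entry. A sketch of the same idiom against a temp file standing in for `/etc/hosts`:

```shell
# Stand-in hosts file that already contains a (possibly stale) entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
# Filter out any old entry, append the current one, then swap the file in.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```

Writing to a scratch file and copying it back (as minikube does via `/tmp/h.$$`) avoids truncating the hosts file mid-read.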
	I0127 13:59:10.239198 1184207 kubeadm.go:883] updating cluster {Name:addons-790770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-790770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:59:10.239322 1184207 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:59:10.239382 1184207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:59:10.322462 1184207 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:59:10.322490 1184207 crio.go:433] Images already preloaded, skipping extraction
	I0127 13:59:10.322545 1184207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:59:10.361530 1184207 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:59:10.361555 1184207 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:59:10.361563 1184207 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 crio true true} ...
	I0127 13:59:10.361657 1184207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-790770 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-790770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:59:10.361742 1184207 ssh_runner.go:195] Run: crio config
	I0127 13:59:10.413260 1184207 cni.go:84] Creating CNI manager for ""
	I0127 13:59:10.413290 1184207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 13:59:10.413305 1184207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:59:10.413329 1184207 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-790770 NodeName:addons-790770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:59:10.413496 1184207 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-790770"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:59:10.413600 1184207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:59:10.423199 1184207 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:59:10.423294 1184207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:59:10.432462 1184207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0127 13:59:10.450542 1184207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:59:10.468340 1184207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0127 13:59:10.486883 1184207 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0127 13:59:10.491222 1184207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:59:10.501989 1184207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:59:10.591981 1184207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:59:10.605228 1184207 certs.go:68] Setting up /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770 for IP: 192.168.49.2
	I0127 13:59:10.605254 1184207 certs.go:194] generating shared ca certs ...
	I0127 13:59:10.605272 1184207 certs.go:226] acquiring lock for ca certs: {Name:mkc41c7b23c25c519f35097ca495c3081fb96f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:10.605458 1184207 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.key
	I0127 13:59:11.128954 1184207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.crt ...
	I0127 13:59:11.128985 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.crt: {Name:mkf635e07ac4563627161e42e94f63a2d00a85d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:11.129791 1184207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.key ...
	I0127 13:59:11.129811 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.key: {Name:mkbafd406190c78e6b76b81ecdd2a958e34fa5f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:11.129948 1184207 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20325-1178062/.minikube/proxy-client-ca.key
	I0127 13:59:11.342282 1184207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20325-1178062/.minikube/proxy-client-ca.crt ...
	I0127 13:59:11.342312 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/proxy-client-ca.crt: {Name:mk7c8ba7dbdb32d4e487f83e0172124ad9aa9b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:11.342481 1184207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20325-1178062/.minikube/proxy-client-ca.key ...
	I0127 13:59:11.342495 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/proxy-client-ca.key: {Name:mkab359f52d5a4340ed14e8e9de7835ce14734b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:11.342572 1184207 certs.go:256] generating profile certs ...
	I0127 13:59:11.342631 1184207 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.key
	I0127 13:59:11.342659 1184207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt with IP's: []
	I0127 13:59:11.739341 1184207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt ...
	I0127 13:59:11.739373 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: {Name:mka45a4bf9b97850090cb1892a369a1f4f07763f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:11.739566 1184207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.key ...
	I0127 13:59:11.739579 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.key: {Name:mkfbf0928e330399cc726257a06a385bf371e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:11.740303 1184207 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.key.b2620e3a
	I0127 13:59:11.740332 1184207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.crt.b2620e3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0127 13:59:12.254115 1184207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.crt.b2620e3a ...
	I0127 13:59:12.254151 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.crt.b2620e3a: {Name:mk498ad14f8ea4c18e71a88f2b696763a04dbcc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:12.254345 1184207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.key.b2620e3a ...
	I0127 13:59:12.254361 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.key.b2620e3a: {Name:mk2e9638cbf57d528039cbaac3f0ef95ece552bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:12.254452 1184207 certs.go:381] copying /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.crt.b2620e3a -> /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.crt
	I0127 13:59:12.254530 1184207 certs.go:385] copying /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.key.b2620e3a -> /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.key
	I0127 13:59:12.254591 1184207 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/proxy-client.key
	I0127 13:59:12.254613 1184207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/proxy-client.crt with IP's: []
	I0127 13:59:12.790619 1184207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/proxy-client.crt ...
	I0127 13:59:12.790651 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/proxy-client.crt: {Name:mkfac398f7c9997f3aa83bb19b842576d4cbc943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:12.790838 1184207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/proxy-client.key ...
	I0127 13:59:12.790853 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/proxy-client.key: {Name:mk3a28b68cef8ff804cf7449021a52b114ce72c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:12.791055 1184207 certs.go:484] found cert: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 13:59:12.791100 1184207 certs.go:484] found cert: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/ca.pem (1082 bytes)
	I0127 13:59:12.791138 1184207 certs.go:484] found cert: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:59:12.791169 1184207 certs.go:484] found cert: /home/jenkins/minikube-integration/20325-1178062/.minikube/certs/key.pem (1675 bytes)
	I0127 13:59:12.791854 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:59:12.816526 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 13:59:12.846821 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:59:12.872949 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 13:59:12.900926 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 13:59:12.925609 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:59:12.949474 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:59:12.972735 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:59:12.996422 1184207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:59:13.022706 1184207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:59:13.041177 1184207 ssh_runner.go:195] Run: openssl version
	I0127 13:59:13.046804 1184207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:59:13.057046 1184207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:59:13.060622 1184207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:59 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:59:13.060732 1184207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:59:13.067791 1184207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
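The `b5213941.0` symlink created above follows OpenSSL's hashed-directory convention: CA certificates under `/etc/ssl/certs` are found by a subject-name hash, so minikube links its CA as `<hash>.0`. A sketch with a throwaway self-signed cert in a temp directory (the CN and paths here are illustrative, not minikube's):

```shell
# Generate a throwaway CA-style cert to hash.
cert_dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=exampleCA' \
  -keyout "$cert_dir/ca.key" -out "$cert_dir/ca.pem" 2>/dev/null
# Derive the subject-name hash and create the lookup symlink OpenSSL expects.
hash=$(openssl x509 -hash -noout -in "$cert_dir/ca.pem")
ln -fs "$cert_dir/ca.pem" "$cert_dir/$hash.0"
```

The trailing `.0` is a collision index; a second CA whose subject hashed to the same value would be linked as `<hash>.1`.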
	I0127 13:59:13.077346 1184207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:59:13.080876 1184207 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 13:59:13.080943 1184207 kubeadm.go:392] StartCluster: {Name:addons-790770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-790770 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:59:13.081040 1184207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:59:13.081105 1184207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:59:13.118015 1184207 cri.go:89] found id: ""
	I0127 13:59:13.118136 1184207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:59:13.127266 1184207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:59:13.136895 1184207 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 13:59:13.137013 1184207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:59:13.146260 1184207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:59:13.146329 1184207 kubeadm.go:157] found existing configuration files:
	
	I0127 13:59:13.146387 1184207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:59:13.155308 1184207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:59:13.155428 1184207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:59:13.164309 1184207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:59:13.173507 1184207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:59:13.173661 1184207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:59:13.182488 1184207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:59:13.191317 1184207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:59:13.191408 1184207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:59:13.200184 1184207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:59:13.209494 1184207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:59:13.209584 1184207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:59:13.218377 1184207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 13:59:13.262428 1184207 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:59:13.262497 1184207 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:59:13.281710 1184207 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 13:59:13.281852 1184207 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0127 13:59:13.281934 1184207 kubeadm.go:310] OS: Linux
	I0127 13:59:13.282030 1184207 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 13:59:13.282130 1184207 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 13:59:13.282212 1184207 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 13:59:13.282295 1184207 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 13:59:13.282380 1184207 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 13:59:13.282496 1184207 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 13:59:13.282581 1184207 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 13:59:13.282669 1184207 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 13:59:13.282753 1184207 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 13:59:13.343090 1184207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:59:13.343267 1184207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:59:13.343401 1184207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:59:13.350292 1184207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:59:13.357235 1184207 out.go:235]   - Generating certificates and keys ...
	I0127 13:59:13.357336 1184207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:59:13.357406 1184207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:59:13.694471 1184207 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 13:59:14.031014 1184207 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 13:59:14.837667 1184207 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 13:59:15.463136 1184207 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 13:59:16.057489 1184207 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 13:59:16.057869 1184207 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-790770 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0127 13:59:16.345761 1184207 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 13:59:16.346174 1184207 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-790770 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0127 13:59:17.237067 1184207 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 13:59:17.814829 1184207 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 13:59:18.504183 1184207 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 13:59:18.504480 1184207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:59:19.180800 1184207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:59:19.481858 1184207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:59:19.958367 1184207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:59:20.454513 1184207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:59:21.199777 1184207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:59:21.200488 1184207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:59:21.203304 1184207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:59:21.206929 1184207 out.go:235]   - Booting up control plane ...
	I0127 13:59:21.207046 1184207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:59:21.207125 1184207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:59:21.207202 1184207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:59:21.216263 1184207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:59:21.222763 1184207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:59:21.222992 1184207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:59:21.318488 1184207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:59:21.318630 1184207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:59:22.320217 1184207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001860534s
	I0127 13:59:22.320314 1184207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:59:28.322363 1184207 kubeadm.go:310] [api-check] The API server is healthy after 6.002146594s
	I0127 13:59:28.345039 1184207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:59:28.359182 1184207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:59:28.388222 1184207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:59:28.388428 1184207 kubeadm.go:310] [mark-control-plane] Marking the node addons-790770 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:59:28.398180 1184207 kubeadm.go:310] [bootstrap-token] Using token: icqcds.2m85puvkdui5ohul
	I0127 13:59:28.403023 1184207 out.go:235]   - Configuring RBAC rules ...
	I0127 13:59:28.403161 1184207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:59:28.405960 1184207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:59:28.414914 1184207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:59:28.418915 1184207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:59:28.422912 1184207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:59:28.426906 1184207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:59:28.728737 1184207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:59:29.189493 1184207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:59:29.729605 1184207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:59:29.730632 1184207 kubeadm.go:310] 
	I0127 13:59:29.730701 1184207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:59:29.730708 1184207 kubeadm.go:310] 
	I0127 13:59:29.730780 1184207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:59:29.730790 1184207 kubeadm.go:310] 
	I0127 13:59:29.730814 1184207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:59:29.730869 1184207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:59:29.730916 1184207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:59:29.730921 1184207 kubeadm.go:310] 
	I0127 13:59:29.730977 1184207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:59:29.730982 1184207 kubeadm.go:310] 
	I0127 13:59:29.731027 1184207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:59:29.731031 1184207 kubeadm.go:310] 
	I0127 13:59:29.731079 1184207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:59:29.731149 1184207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:59:29.731213 1184207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:59:29.731217 1184207 kubeadm.go:310] 
	I0127 13:59:29.731296 1184207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:59:29.731368 1184207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:59:29.731373 1184207 kubeadm.go:310] 
	I0127 13:59:29.731450 1184207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token icqcds.2m85puvkdui5ohul \
	I0127 13:59:29.731547 1184207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b64325a2f96b9ba71b59ccd8d6ea566e56803781d362f3f51dbafe1ec1b4a36e \
	I0127 13:59:29.731567 1184207 kubeadm.go:310] 	--control-plane 
	I0127 13:59:29.731571 1184207 kubeadm.go:310] 
	I0127 13:59:29.731656 1184207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:59:29.731661 1184207 kubeadm.go:310] 
	I0127 13:59:29.731737 1184207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token icqcds.2m85puvkdui5ohul \
	I0127 13:59:29.731831 1184207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b64325a2f96b9ba71b59ccd8d6ea566e56803781d362f3f51dbafe1ec1b4a36e 
	I0127 13:59:29.734566 1184207 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0127 13:59:29.734796 1184207 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0127 13:59:29.734905 1184207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:59:29.734921 1184207 cni.go:84] Creating CNI manager for ""
	I0127 13:59:29.734928 1184207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 13:59:29.739884 1184207 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 13:59:29.742770 1184207 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 13:59:29.746529 1184207 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 13:59:29.746549 1184207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 13:59:29.765544 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 13:59:30.081509 1184207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:59:30.081671 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:30.081724 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-790770 minikube.k8s.io/updated_at=2025_01_27T13_59_30_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6a5089c94d5c3e26f81a121b7614c4f7f440f9c0 minikube.k8s.io/name=addons-790770 minikube.k8s.io/primary=true
	I0127 13:59:30.265448 1184207 ops.go:34] apiserver oom_adj: -16
	I0127 13:59:30.265562 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:30.765690 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:31.266369 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:31.765678 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:32.266015 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:32.766135 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:33.266097 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:33.765614 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:34.266233 1184207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:59:34.366794 1184207 kubeadm.go:1113] duration metric: took 4.285195625s to wait for elevateKubeSystemPrivileges
	I0127 13:59:34.366823 1184207 kubeadm.go:394] duration metric: took 21.285883825s to StartCluster
	I0127 13:59:34.366840 1184207 settings.go:142] acquiring lock: {Name:mka086192abcf59b90623971ab3be6b1797431eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:34.367592 1184207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20325-1178062/kubeconfig
	I0127 13:59:34.368017 1184207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/kubeconfig: {Name:mkc2214a456e2140c56426de06bc4d16ad1c8ddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:34.368231 1184207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:59:34.368411 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 13:59:34.368682 1184207 config.go:182] Loaded profile config "addons-790770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:59:34.368717 1184207 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0127 13:59:34.368825 1184207 addons.go:69] Setting yakd=true in profile "addons-790770"
	I0127 13:59:34.368843 1184207 addons.go:238] Setting addon yakd=true in "addons-790770"
	I0127 13:59:34.368866 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.368910 1184207 addons.go:69] Setting inspektor-gadget=true in profile "addons-790770"
	I0127 13:59:34.368928 1184207 addons.go:238] Setting addon inspektor-gadget=true in "addons-790770"
	I0127 13:59:34.368953 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.369402 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.369418 1184207 addons.go:69] Setting metrics-server=true in profile "addons-790770"
	I0127 13:59:34.369432 1184207 addons.go:238] Setting addon metrics-server=true in "addons-790770"
	I0127 13:59:34.369450 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.369840 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.370330 1184207 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-790770"
	I0127 13:59:34.370354 1184207 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-790770"
	I0127 13:59:34.370384 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.370837 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.373747 1184207 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-790770"
	I0127 13:59:34.374113 1184207 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-790770"
	I0127 13:59:34.374269 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.373909 1184207 addons.go:69] Setting registry=true in profile "addons-790770"
	I0127 13:59:34.374575 1184207 addons.go:238] Setting addon registry=true in "addons-790770"
	I0127 13:59:34.374599 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.375056 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.373922 1184207 addons.go:69] Setting storage-provisioner=true in profile "addons-790770"
	I0127 13:59:34.376556 1184207 addons.go:238] Setting addon storage-provisioner=true in "addons-790770"
	I0127 13:59:34.376582 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.377105 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.380093 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.373935 1184207 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-790770"
	I0127 13:59:34.380941 1184207 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-790770"
	I0127 13:59:34.381244 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.373943 1184207 addons.go:69] Setting volcano=true in profile "addons-790770"
	I0127 13:59:34.393908 1184207 addons.go:238] Setting addon volcano=true in "addons-790770"
	I0127 13:59:34.393955 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.394566 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.373949 1184207 addons.go:69] Setting volumesnapshots=true in profile "addons-790770"
	I0127 13:59:34.406882 1184207 addons.go:238] Setting addon volumesnapshots=true in "addons-790770"
	I0127 13:59:34.406931 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.407571 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.374032 1184207 addons.go:69] Setting ingress=true in profile "addons-790770"
	I0127 13:59:34.421109 1184207 addons.go:238] Setting addon ingress=true in "addons-790770"
	I0127 13:59:34.421161 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.421626 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.374037 1184207 addons.go:69] Setting cloud-spanner=true in profile "addons-790770"
	I0127 13:59:34.432110 1184207 addons.go:238] Setting addon cloud-spanner=true in "addons-790770"
	I0127 13:59:34.432154 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.432661 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.374042 1184207 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-790770"
	I0127 13:59:34.448949 1184207 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-790770"
	I0127 13:59:34.448991 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.449507 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.374046 1184207 addons.go:69] Setting default-storageclass=true in profile "addons-790770"
	I0127 13:59:34.465717 1184207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-790770"
	I0127 13:59:34.466074 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.374050 1184207 addons.go:69] Setting gcp-auth=true in profile "addons-790770"
	I0127 13:59:34.479439 1184207 mustload.go:65] Loading cluster: addons-790770
	I0127 13:59:34.479648 1184207 config.go:182] Loaded profile config "addons-790770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:59:34.479900 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.369402 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.374059 1184207 addons.go:69] Setting ingress-dns=true in profile "addons-790770"
	I0127 13:59:34.490563 1184207 addons.go:238] Setting addon ingress-dns=true in "addons-790770"
	I0127 13:59:34.490620 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.491096 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.374091 1184207 out.go:177] * Verifying Kubernetes components...
	I0127 13:59:34.514041 1184207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:59:34.522906 1184207 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 13:59:34.523083 1184207 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.1
	I0127 13:59:34.526884 1184207 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:59:34.526954 1184207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:59:34.527058 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.527251 1184207 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 13:59:34.527284 1184207 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 13:59:34.527340 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.541597 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 13:59:34.553967 1184207 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 13:59:34.557227 1184207 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 13:59:34.557258 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 13:59:34.557325 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.564908 1184207 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 13:59:34.565014 1184207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:59:34.575162 1184207 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:59:34.575187 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:59:34.575283 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.575605 1184207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 13:59:34.579114 1184207 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 13:59:34.583069 1184207 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 13:59:34.583091 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 13:59:34.583159 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.583309 1184207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 13:59:34.587110 1184207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 13:59:34.587989 1184207 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 13:59:34.590267 1184207 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 13:59:34.590551 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 13:59:34.590624 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.599917 1184207 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 13:59:34.599949 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 13:59:34.600018 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.615643 1184207 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-790770"
	I0127 13:59:34.615747 1184207 host.go:66] Checking if "addons-790770" exists ...
	W0127 13:59:34.655965 1184207 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0127 13:59:34.665336 1184207 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 13:59:34.669516 1184207 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 13:59:34.669542 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 13:59:34.669616 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.681527 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.701677 1184207 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 13:59:34.704507 1184207 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 13:59:34.704535 1184207 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 13:59:34.704616 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.728999 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 13:59:34.731757 1184207 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 13:59:34.731781 1184207 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 13:59:34.731852 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.748983 1184207 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 13:59:34.758036 1184207 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 13:59:34.758063 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 13:59:34.758131 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.791698 1184207 addons.go:238] Setting addon default-storageclass=true in "addons-790770"
	I0127 13:59:34.791752 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.792189 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:34.824572 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 13:59:34.827542 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 13:59:34.831784 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 13:59:34.837377 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 13:59:34.840270 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 13:59:34.844169 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 13:59:34.850489 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 13:59:34.859751 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.860526 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.867651 1184207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 13:59:34.868570 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.870597 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.871600 1184207 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 13:59:34.871637 1184207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 13:59:34.871706 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.892021 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.892470 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.916730 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:34.916733 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.968210 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.984978 1184207 out.go:177]   - Using image docker.io/busybox:stable
	I0127 13:59:34.988591 1184207 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 13:59:34.990097 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:34.991752 1184207 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 13:59:34.991778 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 13:59:34.991845 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:34.996267 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:35.027127 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:35.027336 1184207 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:59:35.027349 1184207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:59:35.027412 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:35.027618 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	W0127 13:59:35.036354 1184207 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0127 13:59:35.036384 1184207 retry.go:31] will retry after 283.485132ms: ssh: handshake failed: EOF
	I0127 13:59:35.060992 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:35.068470 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:35.160470 1184207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:59:35.306440 1184207 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 13:59:35.306511 1184207 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 13:59:35.322564 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 13:59:35.357408 1184207 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 13:59:35.357486 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 13:59:35.357968 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:59:35.391428 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 13:59:35.474724 1184207 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 13:59:35.474796 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 13:59:35.515100 1184207 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 13:59:35.515174 1184207 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 13:59:35.536113 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 13:59:35.538158 1184207 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 13:59:35.538230 1184207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 13:59:35.542268 1184207 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:59:35.542333 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 13:59:35.544784 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 13:59:35.562405 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 13:59:35.599036 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 13:59:35.623102 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:59:35.629870 1184207 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 13:59:35.629897 1184207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 13:59:35.656587 1184207 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 13:59:35.656615 1184207 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 13:59:35.677398 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 13:59:35.771950 1184207 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 13:59:35.771978 1184207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 13:59:35.782908 1184207 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:59:35.782935 1184207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:59:35.829131 1184207 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 13:59:35.829157 1184207 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 13:59:35.832640 1184207 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 13:59:35.832665 1184207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 13:59:35.888713 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 13:59:35.966048 1184207 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 13:59:35.966075 1184207 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 13:59:36.011540 1184207 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:59:36.011572 1184207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:59:36.031310 1184207 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 13:59:36.031337 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 13:59:36.086232 1184207 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 13:59:36.086259 1184207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 13:59:36.166857 1184207 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 13:59:36.166885 1184207 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 13:59:36.178817 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:59:36.262940 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 13:59:36.282774 1184207 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 13:59:36.282803 1184207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 13:59:36.335134 1184207 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 13:59:36.335160 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 13:59:36.353114 1184207 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 13:59:36.353141 1184207 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 13:59:36.463818 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 13:59:36.465846 1184207 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 13:59:36.465869 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 13:59:36.653852 1184207 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 13:59:36.653881 1184207 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 13:59:36.670830 1184207 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.129198533s)
	I0127 13:59:36.670859 1184207 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0127 13:59:36.671951 1184207 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.51139301s)
	I0127 13:59:36.672694 1184207 node_ready.go:35] waiting up to 6m0s for node "addons-790770" to be "Ready" ...
	I0127 13:59:36.791383 1184207 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 13:59:36.791410 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 13:59:36.950215 1184207 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 13:59:36.950241 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 13:59:37.053478 1184207 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 13:59:37.053562 1184207 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 13:59:37.206321 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 13:59:37.535080 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.212426513s)
	I0127 13:59:37.745967 1184207 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-790770" context rescaled to 1 replicas
	I0127 13:59:38.811674 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:39.576246 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.218222548s)
	I0127 13:59:40.599636 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.208165748s)
	I0127 13:59:40.599668 1184207 addons.go:479] Verifying addon ingress=true in "addons-790770"
	I0127 13:59:40.599919 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.063732948s)
	I0127 13:59:40.599981 1184207 addons.go:479] Verifying addon registry=true in "addons-790770"
	I0127 13:59:40.600283 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.055423098s)
	I0127 13:59:40.600348 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.037921116s)
	I0127 13:59:40.600543 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.001482838s)
	I0127 13:59:40.600567 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.97744401s)
	I0127 13:59:40.600729 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.923307384s)
	I0127 13:59:40.600759 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.712027549s)
	I0127 13:59:40.600829 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.421986846s)
	I0127 13:59:40.600839 1184207 addons.go:479] Verifying addon metrics-server=true in "addons-790770"
	I0127 13:59:40.600876 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.337908812s)
	I0127 13:59:40.604928 1184207 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-790770 service yakd-dashboard -n yakd-dashboard
	
	I0127 13:59:40.605041 1184207 out.go:177] * Verifying ingress addon...
	I0127 13:59:40.605084 1184207 out.go:177] * Verifying registry addon...
	I0127 13:59:40.609682 1184207 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 13:59:40.610852 1184207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 13:59:40.637115 1184207 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 13:59:40.637194 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:40.637635 1184207 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 13:59:40.637683 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:40.643962 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.18010284s)
	W0127 13:59:40.644130 1184207 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 13:59:40.644169 1184207 retry.go:31] will retry after 252.855899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	W0127 13:59:40.655863 1184207 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0127 13:59:40.898123 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 13:59:41.070723 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.864295541s)
	I0127 13:59:41.070808 1184207 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-790770"
	I0127 13:59:41.074304 1184207 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 13:59:41.078038 1184207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 13:59:41.097675 1184207 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 13:59:41.097751 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:41.114484 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:41.115596 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:41.176747 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:41.581543 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:41.614608 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:41.615154 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:42.082567 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:42.114687 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:42.115751 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:42.582063 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:42.614335 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:42.614550 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:43.082054 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:43.113905 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:43.115813 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:43.582876 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:43.614238 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:43.614974 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:43.675980 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:43.680726 1184207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.782555333s)
	I0127 13:59:44.082563 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:44.114906 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:44.115218 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:44.582297 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:44.615657 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:44.616389 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:44.621589 1184207 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 13:59:44.621674 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:44.638907 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:44.739909 1184207 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 13:59:44.759223 1184207 addons.go:238] Setting addon gcp-auth=true in "addons-790770"
	I0127 13:59:44.759279 1184207 host.go:66] Checking if "addons-790770" exists ...
	I0127 13:59:44.759779 1184207 cli_runner.go:164] Run: docker container inspect addons-790770 --format={{.State.Status}}
	I0127 13:59:44.777807 1184207 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 13:59:44.777877 1184207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-790770
	I0127 13:59:44.801867 1184207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/addons-790770/id_rsa Username:docker}
	I0127 13:59:44.903558 1184207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 13:59:44.906360 1184207 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 13:59:44.909168 1184207 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 13:59:44.909201 1184207 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 13:59:44.928417 1184207 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 13:59:44.928485 1184207 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 13:59:44.946796 1184207 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 13:59:44.946820 1184207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 13:59:44.966479 1184207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 13:59:45.083498 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:45.117769 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:45.118552 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:45.517837 1184207 addons.go:479] Verifying addon gcp-auth=true in "addons-790770"
	I0127 13:59:45.521026 1184207 out.go:177] * Verifying gcp-auth addon...
	I0127 13:59:45.524781 1184207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 13:59:45.535963 1184207 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 13:59:45.535991 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:45.635565 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:45.636395 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:45.637519 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:45.676382 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:46.028548 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:46.082156 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:46.114575 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:46.115231 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:46.527944 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:46.581306 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:46.614243 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:46.615301 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:47.028545 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:47.082497 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:47.114247 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:47.115107 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:47.528578 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:47.581640 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:47.613369 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:47.614736 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:47.676724 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:48.029284 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:48.129683 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:48.130611 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:48.131131 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:48.528157 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:48.581982 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:48.614619 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:48.614997 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:49.028540 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:49.081976 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:49.114550 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:49.114856 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:49.528457 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:49.582663 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:49.613953 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:49.615110 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:50.029248 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:50.082516 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:50.114265 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:50.115477 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:50.176711 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:50.528368 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:50.582037 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:50.614910 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:50.615240 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:51.027968 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:51.082233 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:51.113864 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:51.115267 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:51.528504 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:51.581750 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:51.614116 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:51.615130 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:52.028413 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:52.082441 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:52.115319 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:52.115909 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:52.528838 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:52.581900 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:52.613666 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:52.614669 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:52.676284 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:53.028114 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:53.081616 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:53.114507 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:53.116217 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:53.528967 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:53.581394 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:53.613833 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:53.614107 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:54.029658 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:54.082364 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:54.114086 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:54.114550 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:54.528464 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:54.581574 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:54.613835 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:54.614799 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:55.028364 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:55.082049 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:55.114650 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:55.114869 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:55.176457 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:55.528943 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:55.582226 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:55.613479 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:55.614417 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:56.030740 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:56.130058 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:56.130354 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:56.131922 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:56.528702 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:56.582140 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:56.613733 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:56.614835 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:57.028841 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:57.082081 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:57.114444 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:57.115391 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:57.176491 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:57.528284 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:57.582034 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:57.614032 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:57.614089 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:58.028461 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:58.082102 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:58.115434 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:58.116203 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:58.529448 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:58.581590 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:58.613639 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:58.615053 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:59.028986 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:59.081459 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:59.113849 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:59:59.115037 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:59.177732 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 13:59:59.528800 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:59:59.582310 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:59:59.613367 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:59:59.614605 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:00.038710 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:00.086613 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:00.128737 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:00.134566 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:00.543137 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:00.586792 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:00.617828 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:00.621182 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:01.029003 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:01.083849 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:01.114677 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:01.116498 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:01.534342 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:01.582390 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:01.615602 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:01.616864 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:01.676886 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 14:00:02.029610 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:02.088116 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:02.115454 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:02.115542 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:02.529842 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:02.581919 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:02.613828 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:02.615021 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:03.028901 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:03.082059 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:03.115050 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:03.115181 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:03.528341 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:03.581769 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:03.614019 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:03.614646 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:04.028159 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:04.082613 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:04.113767 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:04.116292 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:04.176342 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 14:00:04.528890 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:04.582215 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:04.614870 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:04.616868 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:05.028065 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:05.081727 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:05.114160 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:05.114505 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:05.528390 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:05.582261 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:05.615032 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:05.615242 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:06.028462 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:06.081999 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:06.114535 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:06.115469 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:06.528948 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:06.581682 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:06.613815 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:06.615337 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:06.676764 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 14:00:07.028984 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:07.081444 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:07.115591 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:07.115854 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:07.528825 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:07.582370 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:07.614489 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:07.615851 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:08.028987 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:08.081497 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:08.114612 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:08.114873 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:08.528313 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:08.582481 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:08.629836 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:08.630317 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:09.028033 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:09.082018 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:09.113995 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:09.114400 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:09.178654 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 14:00:09.528426 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:09.581355 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:09.614523 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:09.614622 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:10.028723 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:10.082805 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:10.114782 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:10.114959 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:10.528295 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:10.582284 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:10.613644 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:10.613967 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:11.028245 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:11.081788 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:11.114545 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:11.115588 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:11.528694 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:11.582139 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:11.615116 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:11.616073 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:11.676453 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 14:00:12.028635 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:12.081875 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:12.114500 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:12.116239 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:12.528611 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:12.581790 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:12.614546 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:12.615512 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:13.029246 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:13.082248 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:13.113857 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:13.114573 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:13.528011 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:13.581222 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:13.613667 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:13.614310 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:14.028719 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:14.081641 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:14.114224 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:14.114893 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:14.176741 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 14:00:14.528318 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:14.582255 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:14.613667 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:14.613912 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:15.029276 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:15.082440 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:15.114697 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:15.115021 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:15.528844 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:15.582719 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:15.613689 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:15.614539 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:16.028910 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:16.081983 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:16.114362 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:16.115053 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:16.528094 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:16.581872 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:16.615234 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:16.615336 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:16.676235 1184207 node_ready.go:53] node "addons-790770" has status "Ready":"False"
	I0127 14:00:17.028905 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:17.082254 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:17.115386 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:17.115567 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:17.528800 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:17.582281 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:17.614059 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:17.615670 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:18.029239 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:18.081741 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:18.113557 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:18.113772 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:18.215304 1184207 node_ready.go:49] node "addons-790770" has status "Ready":"True"
	I0127 14:00:18.215385 1184207 node_ready.go:38] duration metric: took 41.542663811s for node "addons-790770" to be "Ready" ...
	I0127 14:00:18.215410 1184207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:00:18.250676 1184207 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wsktl" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:18.566511 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:18.651064 1184207 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 14:00:18.652102 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:18.652074 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:18.659158 1184207 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 14:00:18.659180 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:19.039075 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:19.156110 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:19.156780 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:19.157762 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:19.528404 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:19.635172 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:19.635877 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:19.636793 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:20.029569 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:20.083478 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:20.114212 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:20.116079 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:20.261103 1184207 pod_ready.go:103] pod "coredns-668d6bf9bc-wsktl" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:20.547174 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:20.588112 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:20.622588 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:20.632357 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:20.759035 1184207 pod_ready.go:93] pod "coredns-668d6bf9bc-wsktl" in "kube-system" namespace has status "Ready":"True"
	I0127 14:00:20.759060 1184207 pod_ready.go:82] duration metric: took 2.508292644s for pod "coredns-668d6bf9bc-wsktl" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.759081 1184207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-790770" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.765261 1184207 pod_ready.go:93] pod "etcd-addons-790770" in "kube-system" namespace has status "Ready":"True"
	I0127 14:00:20.765289 1184207 pod_ready.go:82] duration metric: took 6.199195ms for pod "etcd-addons-790770" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.765305 1184207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-790770" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.771424 1184207 pod_ready.go:93] pod "kube-apiserver-addons-790770" in "kube-system" namespace has status "Ready":"True"
	I0127 14:00:20.771452 1184207 pod_ready.go:82] duration metric: took 6.138649ms for pod "kube-apiserver-addons-790770" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.771465 1184207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-790770" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.777306 1184207 pod_ready.go:93] pod "kube-controller-manager-addons-790770" in "kube-system" namespace has status "Ready":"True"
	I0127 14:00:20.777331 1184207 pod_ready.go:82] duration metric: took 5.857344ms for pod "kube-controller-manager-addons-790770" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.777346 1184207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5nnw4" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.783461 1184207 pod_ready.go:93] pod "kube-proxy-5nnw4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:00:20.783489 1184207 pod_ready.go:82] duration metric: took 6.135441ms for pod "kube-proxy-5nnw4" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:20.783501 1184207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-790770" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:21.029631 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:21.083573 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:21.120188 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:21.121587 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:21.156018 1184207 pod_ready.go:93] pod "kube-scheduler-addons-790770" in "kube-system" namespace has status "Ready":"True"
	I0127 14:00:21.156043 1184207 pod_ready.go:82] duration metric: took 372.53488ms for pod "kube-scheduler-addons-790770" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:21.156057 1184207 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:21.529270 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:21.582766 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:21.613919 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:21.616800 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:22.029833 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:22.083774 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:22.114436 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:22.115868 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:22.528235 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:22.584788 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:22.614292 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:22.616382 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:23.029690 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:23.086020 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:23.116097 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:23.118905 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:23.164883 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:23.528260 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:23.583163 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:23.615891 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:23.616895 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:24.029586 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:24.083367 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:24.116063 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:24.116754 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:24.528983 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:24.582726 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:24.615646 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:24.616345 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:25.028496 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:25.084541 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:25.115395 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:25.115681 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:25.529091 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:25.582608 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:25.614518 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:25.615759 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:25.662140 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:26.030636 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:26.084086 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:26.117281 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:26.118965 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:26.529053 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:26.583175 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:26.615463 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:26.616866 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:27.029345 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:27.083436 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:27.114521 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:27.115654 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:27.529249 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:27.630953 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:27.634530 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:27.637470 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:28.028904 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:28.083494 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:28.114822 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:28.115067 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:28.163722 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:28.527791 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:28.583510 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:28.614165 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:28.615904 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:29.028930 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:29.086599 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:29.120438 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:29.124340 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:29.528781 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:29.583731 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:29.615313 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:29.616851 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:30.033422 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:30.134202 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:30.135805 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:30.135811 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:30.528858 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:30.583612 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:30.613967 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:30.615525 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:30.664280 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:31.028585 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:31.082956 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:31.114349 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:31.116420 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:31.528827 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:31.583006 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:31.613736 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:31.615578 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:32.029330 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:32.083696 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:32.114439 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:32.116784 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:32.528934 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:32.582819 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:32.617128 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:32.618287 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:33.029226 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:33.084291 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:33.117865 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:33.119305 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:33.171544 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:33.529812 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:33.583640 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:33.619834 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:33.622094 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:34.030143 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:34.084030 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:34.116990 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:34.118582 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:34.529279 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:34.583702 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:34.620908 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:34.622313 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:35.029113 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:35.083507 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:35.132008 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:35.133017 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:35.528069 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:35.584029 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:35.616092 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:35.617070 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:35.665822 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:36.029507 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:36.084305 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:36.117949 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:36.119162 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:36.529169 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:36.584004 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:36.616506 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:36.618545 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:37.030147 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:37.083520 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:37.114898 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:37.116641 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:37.529269 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:37.583232 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:37.616416 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:37.617733 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:38.029634 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:38.084105 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:38.116498 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:38.118737 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:38.162930 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:38.528446 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:38.583929 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:38.614037 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:38.615544 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:39.028397 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:39.083838 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:39.116257 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:39.118748 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:39.541852 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:39.642550 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:39.643884 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:39.644844 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:40.031033 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:40.083559 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:40.116496 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:40.117883 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:40.164634 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:40.529305 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:40.583027 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:40.615058 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:40.615724 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:41.029321 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:41.084864 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:41.114430 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:41.115189 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:41.529286 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:41.583740 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:41.615124 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:41.617539 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:42.029347 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:42.084014 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:42.117976 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:42.119643 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:42.182264 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:42.529782 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:42.584304 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:42.618235 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:42.620819 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:43.029755 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:43.090327 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:43.126599 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:43.127813 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:43.535536 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:43.588614 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:43.616223 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:43.617874 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:44.029003 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:44.083444 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:44.121264 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:44.122792 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:44.528897 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:44.591522 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:44.662796 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:44.683820 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:44.687930 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:45.035449 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:45.088456 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:45.141486 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:45.143256 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:45.532026 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:45.583436 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:45.614945 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:45.615330 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:46.028833 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:46.083721 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:46.120186 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:46.121083 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:46.529941 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:46.583647 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:46.617216 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:46.622177 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:46.667341 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:47.028798 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:47.084294 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:47.117997 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:47.119205 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:47.534176 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:47.595070 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:47.614613 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:47.617811 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:48.031666 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:48.084219 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:48.119335 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:48.121123 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:48.569510 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:48.584322 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:48.616823 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:48.619672 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:48.707723 1184207 pod_ready.go:103] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"False"
	I0127 14:00:49.029939 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:49.084554 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:49.115943 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:49.116766 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:49.530616 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:49.584272 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:49.617032 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:49.619257 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:50.033741 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:50.083610 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:50.118509 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:50.119880 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:50.542736 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:50.638748 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:50.639418 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:50.640363 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:50.663105 1184207 pod_ready.go:93] pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace has status "Ready":"True"
	I0127 14:00:50.663133 1184207 pod_ready.go:82] duration metric: took 29.507068411s for pod "metrics-server-7fbb699795-xwjq9" in "kube-system" namespace to be "Ready" ...
	I0127 14:00:50.663152 1184207 pod_ready.go:39] duration metric: took 32.447701787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:00:50.663167 1184207 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:00:50.663198 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:00:50.663269 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:00:50.706471 1184207 cri.go:89] found id: "9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94"
	I0127 14:00:50.706496 1184207 cri.go:89] found id: ""
	I0127 14:00:50.706507 1184207 logs.go:282] 1 containers: [9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94]
	I0127 14:00:50.706567 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:00:50.710762 1184207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:00:50.710862 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:00:50.753619 1184207 cri.go:89] found id: "b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f"
	I0127 14:00:50.753642 1184207 cri.go:89] found id: ""
	I0127 14:00:50.753650 1184207 logs.go:282] 1 containers: [b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f]
	I0127 14:00:50.753733 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:00:50.757278 1184207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:00:50.757402 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:00:50.801358 1184207 cri.go:89] found id: "9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e"
	I0127 14:00:50.801379 1184207 cri.go:89] found id: ""
	I0127 14:00:50.801388 1184207 logs.go:282] 1 containers: [9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e]
	I0127 14:00:50.801446 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:00:50.805519 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:00:50.805594 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:00:50.846344 1184207 cri.go:89] found id: "60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01"
	I0127 14:00:50.846366 1184207 cri.go:89] found id: ""
	I0127 14:00:50.846380 1184207 logs.go:282] 1 containers: [60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01]
	I0127 14:00:50.846439 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:00:50.850393 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:00:50.850465 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:00:50.890271 1184207 cri.go:89] found id: "37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7"
	I0127 14:00:50.890294 1184207 cri.go:89] found id: ""
	I0127 14:00:50.890302 1184207 logs.go:282] 1 containers: [37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7]
	I0127 14:00:50.890366 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:00:50.894309 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:00:50.894383 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:00:50.938457 1184207 cri.go:89] found id: "2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3"
	I0127 14:00:50.938487 1184207 cri.go:89] found id: ""
	I0127 14:00:50.938496 1184207 logs.go:282] 1 containers: [2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3]
	I0127 14:00:50.938576 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:00:50.942252 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:00:50.942359 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:00:50.981792 1184207 cri.go:89] found id: "c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2"
	I0127 14:00:50.981816 1184207 cri.go:89] found id: ""
	I0127 14:00:50.981825 1184207 logs.go:282] 1 containers: [c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2]
	I0127 14:00:50.981885 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:00:50.985692 1184207 logs.go:123] Gathering logs for dmesg ...
	I0127 14:00:50.985723 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:00:51.002683 1184207 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:00:51.002710 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 14:00:51.029006 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:51.136398 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:51.139594 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:51.140189 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:51.295833 1184207 logs.go:123] Gathering logs for kube-apiserver [9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94] ...
	I0127 14:00:51.295864 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94"
	I0127 14:00:51.379291 1184207 logs.go:123] Gathering logs for etcd [b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f] ...
	I0127 14:00:51.379326 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f"
	I0127 14:00:51.432785 1184207 logs.go:123] Gathering logs for coredns [9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e] ...
	I0127 14:00:51.432903 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e"
	I0127 14:00:51.506174 1184207 logs.go:123] Gathering logs for kube-scheduler [60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01] ...
	I0127 14:00:51.506289 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01"
	I0127 14:00:51.531661 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:51.578181 1184207 logs.go:123] Gathering logs for kube-proxy [37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7] ...
	I0127 14:00:51.578216 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7"
	I0127 14:00:51.583496 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:51.616778 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:51.618995 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:51.623749 1184207 logs.go:123] Gathering logs for kubelet ...
	I0127 14:00:51.623777 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 14:00:51.684426 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: I0127 13:59:34.007996    1517 status_manager.go:890] "Failed to get status for pod" podUID="f92c619f-77a1-4342-9902-988504c16123" pod="kube-system/kube-proxy-5nnw4" err="pods \"kube-proxy-5nnw4\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:00:51.684699 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: W0127 13:59:34.008164    1517 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:00:51.684988 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: E0127 13:59:34.008200    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:00:51.685216 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: W0127 13:59:34.008265    1517 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:00:51.685463 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: E0127 13:59:34.008285    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:00:51.713683 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.181572    1517 status_manager.go:890] "Failed to get status for pod" podUID="86366f18-6468-422c-bb02-21ac021422e5" pod="kube-system/coredns-668d6bf9bc-wsktl" err="pods \"coredns-668d6bf9bc-wsktl\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:00:51.713925 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.181675    1517 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:00:51.714169 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.181704    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:00:51.714522 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.186483    1517 status_manager.go:890] "Failed to get status for pod" podUID="86366f18-6468-422c-bb02-21ac021422e5" pod="kube-system/coredns-668d6bf9bc-wsktl" err="pods \"coredns-668d6bf9bc-wsktl\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:00:51.714773 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.189256    1517 status_manager.go:890] "Failed to get status for pod" podUID="146a2460-1b73-4ec7-81c1-2e2d1b8140aa" pod="kube-system/csi-hostpath-resizer-0" err="pods \"csi-hostpath-resizer-0\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:00:51.714990 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.241865    1517 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-790770" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:00:51.715257 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241916    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:00:51.715477 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.241967    1517 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:00:51.715765 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241980    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:00:51.715985 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242025    1517 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:00:51.716250 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242036    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:00:51.716476 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242084    1517 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:00:51.716758 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242096    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	I0127 14:00:51.751512 1184207 logs.go:123] Gathering logs for kindnet [c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2] ...
	I0127 14:00:51.751561 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2"
	I0127 14:00:51.798262 1184207 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:00:51.798291 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:00:51.891337 1184207 logs.go:123] Gathering logs for container status ...
	I0127 14:00:51.891382 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:00:51.938511 1184207 logs.go:123] Gathering logs for kube-controller-manager [2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3] ...
	I0127 14:00:51.938544 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3"
	I0127 14:00:52.006098 1184207 out.go:358] Setting ErrFile to fd 2...
	I0127 14:00:52.006134 1184207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 14:00:52.006204 1184207 out.go:270] X Problems detected in kubelet:
	W0127 14:00:52.006220 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241980    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:00:52.006236 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242025    1517 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:00:52.006246 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242036    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:00:52.006257 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242084    1517 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:00:52.006266 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242096    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	I0127 14:00:52.006274 1184207 out.go:358] Setting ErrFile to fd 2...
	I0127 14:00:52.006280 1184207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:00:52.028678 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:52.082999 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:52.115429 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:52.116465 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:52.529075 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:52.582796 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:52.614634 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:52.615740 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:53.028105 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:53.082786 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:53.113906 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:53.115305 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:53.528776 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:53.583414 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:53.615756 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:53.616993 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:54.029279 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:54.083159 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:54.114803 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:54.115876 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:54.528523 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:54.583323 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:54.614747 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:54.616006 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:55.028509 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:55.084409 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:55.121509 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:55.126569 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:55.528475 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:55.583127 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:55.614759 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:55.616122 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:56.029241 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:56.082910 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:56.114428 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:56.115366 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:56.528060 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:56.582654 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:56.615118 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:56.616072 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:57.028927 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:57.084761 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:57.117533 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:57.118703 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:57.528659 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:57.583963 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:57.615782 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:57.617562 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:58.028758 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:58.084389 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:58.118373 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:58.120355 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:58.528886 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:58.583263 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:58.616064 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:58.621799 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:59.029461 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:59.083259 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:59.116339 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:59.117870 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:00:59.529336 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:00:59.585077 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:00:59.616969 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:00:59.622870 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:00.045339 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:00.159891 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:00.160571 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:00.161106 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:00.531326 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:00.584008 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:00.613741 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:00.616024 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:01.028867 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:01.083834 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:01.115154 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:01.116417 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:01.528837 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:01.583323 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:01.615306 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:01.616191 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:02.009303 1184207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:01:02.028217 1184207 api_server.go:72] duration metric: took 1m27.659955143s to wait for apiserver process to appear ...
	I0127 14:01:02.028285 1184207 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:01:02.028328 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:01:02.028404 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:01:02.031217 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:02.077755 1184207 cri.go:89] found id: "9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94"
	I0127 14:01:02.077796 1184207 cri.go:89] found id: ""
	I0127 14:01:02.077805 1184207 logs.go:282] 1 containers: [9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94]
	I0127 14:01:02.077876 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:02.083940 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:02.084369 1184207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:01:02.084621 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:01:02.117912 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:02.120399 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:02.155987 1184207 cri.go:89] found id: "b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f"
	I0127 14:01:02.156060 1184207 cri.go:89] found id: ""
	I0127 14:01:02.156097 1184207 logs.go:282] 1 containers: [b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f]
	I0127 14:01:02.156192 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:02.169559 1184207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:01:02.169730 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:01:02.376253 1184207 cri.go:89] found id: "9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e"
	I0127 14:01:02.376331 1184207 cri.go:89] found id: ""
	I0127 14:01:02.376351 1184207 logs.go:282] 1 containers: [9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e]
	I0127 14:01:02.376436 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:02.394162 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:01:02.394294 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:01:02.529311 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:02.584929 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:02.616744 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:02.618350 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:02.830440 1184207 cri.go:89] found id: "60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01"
	I0127 14:01:02.830516 1184207 cri.go:89] found id: ""
	I0127 14:01:02.830537 1184207 logs.go:282] 1 containers: [60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01]
	I0127 14:01:02.830634 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:02.843553 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:01:02.843700 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:01:03.026118 1184207 cri.go:89] found id: "37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7"
	I0127 14:01:03.026203 1184207 cri.go:89] found id: ""
	I0127 14:01:03.026235 1184207 logs.go:282] 1 containers: [37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7]
	I0127 14:01:03.026328 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:03.035977 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:03.048568 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:01:03.048703 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:01:03.084608 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:03.119949 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:03.122115 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:03.349821 1184207 cri.go:89] found id: "2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3"
	I0127 14:01:03.349894 1184207 cri.go:89] found id: ""
	I0127 14:01:03.349916 1184207 logs.go:282] 1 containers: [2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3]
	I0127 14:01:03.350006 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:03.362141 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:01:03.362266 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:01:03.529361 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:03.583959 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:03.615708 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:03.617482 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:03.909763 1184207 cri.go:89] found id: "c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2"
	I0127 14:01:03.909783 1184207 cri.go:89] found id: ""
	I0127 14:01:03.909791 1184207 logs.go:282] 1 containers: [c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2]
	I0127 14:01:03.909846 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:03.932501 1184207 logs.go:123] Gathering logs for kindnet [c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2] ...
	I0127 14:01:03.932538 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2"
	I0127 14:01:04.029924 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:04.084230 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:04.117048 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:04.118384 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:04.506929 1184207 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:01:04.506964 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:01:04.528620 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:04.590105 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:04.625558 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:04.626515 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:04.659215 1184207 logs.go:123] Gathering logs for kubelet ...
	I0127 14:01:04.659261 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 14:01:04.790529 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: I0127 13:59:34.007996    1517 status_manager.go:890] "Failed to get status for pod" podUID="f92c619f-77a1-4342-9902-988504c16123" pod="kube-system/kube-proxy-5nnw4" err="pods \"kube-proxy-5nnw4\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:01:04.796884 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: W0127 13:59:34.008164    1517 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:01:04.797149 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: E0127 13:59:34.008200    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:04.797334 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: W0127 13:59:34.008265    1517 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:01:04.797560 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: E0127 13:59:34.008285    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:04.826962 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.181572    1517 status_manager.go:890] "Failed to get status for pod" podUID="86366f18-6468-422c-bb02-21ac021422e5" pod="kube-system/coredns-668d6bf9bc-wsktl" err="pods \"coredns-668d6bf9bc-wsktl\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:01:04.827168 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.181675    1517 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:01:04.827383 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.181704    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:04.827610 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.186483    1517 status_manager.go:890] "Failed to get status for pod" podUID="86366f18-6468-422c-bb02-21ac021422e5" pod="kube-system/coredns-668d6bf9bc-wsktl" err="pods \"coredns-668d6bf9bc-wsktl\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:01:04.827848 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.189256    1517 status_manager.go:890] "Failed to get status for pod" podUID="146a2460-1b73-4ec7-81c1-2e2d1b8140aa" pod="kube-system/csi-hostpath-resizer-0" err="pods \"csi-hostpath-resizer-0\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:01:04.828044 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.241865    1517 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-790770" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:01:04.828321 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241916    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:04.828564 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.241967    1517 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:01:04.828800 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241980    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:04.829005 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242025    1517 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:01:04.829234 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242036    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:04.829419 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242084    1517 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:01:04.829653 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242096    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	I0127 14:01:04.865815 1184207 logs.go:123] Gathering logs for dmesg ...
	I0127 14:01:04.865887 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:01:04.926552 1184207 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:01:04.926580 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 14:01:05.029490 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:05.084272 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:05.117399 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:05.119105 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:05.239090 1184207 logs.go:123] Gathering logs for etcd [b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f] ...
	I0127 14:01:05.239167 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f"
	I0127 14:01:05.365609 1184207 logs.go:123] Gathering logs for kube-scheduler [60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01] ...
	I0127 14:01:05.365702 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01"
	I0127 14:01:05.476658 1184207 logs.go:123] Gathering logs for kube-proxy [37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7] ...
	I0127 14:01:05.476737 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7"
	I0127 14:01:05.529348 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:05.573973 1184207 logs.go:123] Gathering logs for kube-apiserver [9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94] ...
	I0127 14:01:05.574058 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94"
	I0127 14:01:05.585509 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:05.622325 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:05.624395 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:05.747100 1184207 logs.go:123] Gathering logs for coredns [9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e] ...
	I0127 14:01:05.747186 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e"
	I0127 14:01:05.876983 1184207 logs.go:123] Gathering logs for kube-controller-manager [2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3] ...
	I0127 14:01:05.877054 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3"
	I0127 14:01:05.978130 1184207 logs.go:123] Gathering logs for container status ...
	I0127 14:01:05.978234 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:01:06.029504 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:06.064911 1184207 out.go:358] Setting ErrFile to fd 2...
	I0127 14:01:06.064983 1184207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 14:01:06.065073 1184207 out.go:270] X Problems detected in kubelet:
	W0127 14:01:06.065120 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241980    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:06.065161 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242025    1517 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:01:06.065222 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242036    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:06.065259 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242084    1517 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:01:06.065308 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242096    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	I0127 14:01:06.065345 1184207 out.go:358] Setting ErrFile to fd 2...
	I0127 14:01:06.065370 1184207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:01:06.086380 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:06.115304 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:06.116453 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:06.539751 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:06.643336 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:06.644982 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:06.645894 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:07.028979 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:07.083214 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:07.116730 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:07.117993 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:07.528998 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:07.583084 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:07.615831 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:07.618321 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:08.029433 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:08.083843 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:08.115203 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:08.116833 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:08.529410 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:08.583679 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:08.617748 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:08.627305 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:09.029116 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:09.087196 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:09.118707 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:09.125221 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:09.533045 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:09.583975 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:09.618191 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:09.621515 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:10.030335 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:10.084150 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:10.116105 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:10.116798 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:10.529149 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:10.583467 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:10.614455 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:10.617381 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:11.029095 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:11.084217 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:11.116851 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:11.118477 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:11.529390 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:11.584475 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:11.617999 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:11.619199 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:12.029679 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:12.093065 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:12.131812 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:12.133421 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:12.529357 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:12.583369 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:12.614521 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:12.616078 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:13.029155 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:13.082869 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:13.114433 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:13.115415 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:13.529011 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:13.582869 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:13.614283 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:13.615037 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:14.029052 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:14.083247 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:14.113758 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:14.115868 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:14.529169 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:14.583385 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:14.614723 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:14.616971 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:15.039820 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:15.138372 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:15.140046 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:15.140991 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:15.530180 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:15.583456 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:15.615885 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:15.617328 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:16.029167 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:16.066450 1184207 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0127 14:01:16.076294 1184207 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0127 14:01:16.077631 1184207 api_server.go:141] control plane version: v1.32.1
	I0127 14:01:16.077701 1184207 api_server.go:131] duration metric: took 14.049402804s to wait for apiserver health ...
	I0127 14:01:16.077724 1184207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:01:16.077774 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:01:16.077856 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:01:16.087437 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:16.115615 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:16.118073 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:16.160879 1184207 cri.go:89] found id: "9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94"
	I0127 14:01:16.160899 1184207 cri.go:89] found id: ""
	I0127 14:01:16.160907 1184207 logs.go:282] 1 containers: [9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94]
	I0127 14:01:16.160975 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:16.165044 1184207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:01:16.165176 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:01:16.224159 1184207 cri.go:89] found id: "b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f"
	I0127 14:01:16.224232 1184207 cri.go:89] found id: ""
	I0127 14:01:16.224269 1184207 logs.go:282] 1 containers: [b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f]
	I0127 14:01:16.224372 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:16.229784 1184207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:01:16.229946 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:01:16.292228 1184207 cri.go:89] found id: "9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e"
	I0127 14:01:16.292306 1184207 cri.go:89] found id: ""
	I0127 14:01:16.292329 1184207 logs.go:282] 1 containers: [9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e]
	I0127 14:01:16.292422 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:16.296657 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:01:16.296743 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:01:16.346892 1184207 cri.go:89] found id: "60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01"
	I0127 14:01:16.346917 1184207 cri.go:89] found id: ""
	I0127 14:01:16.346925 1184207 logs.go:282] 1 containers: [60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01]
	I0127 14:01:16.346997 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:16.351439 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:01:16.351577 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:01:16.395302 1184207 cri.go:89] found id: "37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7"
	I0127 14:01:16.395335 1184207 cri.go:89] found id: ""
	I0127 14:01:16.395344 1184207 logs.go:282] 1 containers: [37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7]
	I0127 14:01:16.395410 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:16.401284 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:01:16.401403 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:01:16.452651 1184207 cri.go:89] found id: "2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3"
	I0127 14:01:16.452725 1184207 cri.go:89] found id: ""
	I0127 14:01:16.452747 1184207 logs.go:282] 1 containers: [2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3]
	I0127 14:01:16.452874 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:16.457104 1184207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:01:16.457221 1184207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:01:16.504406 1184207 cri.go:89] found id: "c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2"
	I0127 14:01:16.504471 1184207 cri.go:89] found id: ""
	I0127 14:01:16.504494 1184207 logs.go:282] 1 containers: [c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2]
	I0127 14:01:16.504591 1184207 ssh_runner.go:195] Run: which crictl
	I0127 14:01:16.508448 1184207 logs.go:123] Gathering logs for kube-apiserver [9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94] ...
	I0127 14:01:16.508545 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94"
	I0127 14:01:16.529170 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:16.565963 1184207 logs.go:123] Gathering logs for etcd [b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f] ...
	I0127 14:01:16.565997 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f"
	I0127 14:01:16.583517 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:16.618071 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:01:16.620151 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:16.642435 1184207 logs.go:123] Gathering logs for kube-scheduler [60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01] ...
	I0127 14:01:16.642472 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01"
	I0127 14:01:16.736745 1184207 logs.go:123] Gathering logs for kube-controller-manager [2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3] ...
	I0127 14:01:16.736783 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3"
	I0127 14:01:16.817554 1184207 logs.go:123] Gathering logs for kindnet [c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2] ...
	I0127 14:01:16.817594 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2"
	I0127 14:01:16.859029 1184207 logs.go:123] Gathering logs for kubelet ...
	I0127 14:01:16.859056 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 14:01:16.917858 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: I0127 13:59:34.007996    1517 status_manager.go:890] "Failed to get status for pod" podUID="f92c619f-77a1-4342-9902-988504c16123" pod="kube-system/kube-proxy-5nnw4" err="pods \"kube-proxy-5nnw4\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:01:16.918078 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: W0127 13:59:34.008164    1517 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:01:16.918332 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: E0127 13:59:34.008200    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:16.918525 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: W0127 13:59:34.008265    1517 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:01:16.918766 1184207 logs.go:138] Found kubelet problem: Jan 27 13:59:34 addons-790770 kubelet[1517]: E0127 13:59:34.008285    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:16.943275 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.181572    1517 status_manager.go:890] "Failed to get status for pod" podUID="86366f18-6468-422c-bb02-21ac021422e5" pod="kube-system/coredns-668d6bf9bc-wsktl" err="pods \"coredns-668d6bf9bc-wsktl\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:01:16.943510 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.181675    1517 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-790770' and this object
	W0127 14:01:16.943737 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.181704    1517 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:16.943998 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.186483    1517 status_manager.go:890] "Failed to get status for pod" podUID="86366f18-6468-422c-bb02-21ac021422e5" pod="kube-system/coredns-668d6bf9bc-wsktl" err="pods \"coredns-668d6bf9bc-wsktl\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:01:16.944221 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: I0127 14:00:18.189256    1517 status_manager.go:890] "Failed to get status for pod" podUID="146a2460-1b73-4ec7-81c1-2e2d1b8140aa" pod="kube-system/csi-hostpath-resizer-0" err="pods \"csi-hostpath-resizer-0\" is forbidden: User \"system:node:addons-790770\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-790770' and this object"
	W0127 14:01:16.944414 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.241865    1517 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-790770" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:01:16.944640 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241916    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:16.944853 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.241967    1517 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:01:16.945083 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241980    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:16.945269 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242025    1517 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:01:16.945502 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242036    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:16.945688 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242084    1517 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:01:16.945913 1184207 logs.go:138] Found kubelet problem: Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242096    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	I0127 14:01:16.984044 1184207 logs.go:123] Gathering logs for dmesg ...
	I0127 14:01:16.984095 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:01:17.001567 1184207 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:01:17.001645 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 14:01:17.029893 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:17.130082 1184207 kapi.go:107] duration metric: took 1m36.519226842s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 14:01:17.132492 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:17.135570 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:17.182861 1184207 logs.go:123] Gathering logs for container status ...
	I0127 14:01:17.182895 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:01:17.232333 1184207 logs.go:123] Gathering logs for coredns [9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e] ...
	I0127 14:01:17.232371 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e"
	I0127 14:01:17.274869 1184207 logs.go:123] Gathering logs for kube-proxy [37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7] ...
	I0127 14:01:17.274898 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7"
	I0127 14:01:17.319637 1184207 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:01:17.319665 1184207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:01:17.432640 1184207 out.go:358] Setting ErrFile to fd 2...
	I0127 14:01:17.432675 1184207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 14:01:17.432761 1184207 out.go:270] X Problems detected in kubelet:
	W0127 14:01:17.432792 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.241980    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:17.432836 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242025    1517 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-790770' and this object
	W0127 14:01:17.432857 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242036    1517 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	W0127 14:01:17.432870 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: W0127 14:00:18.242084    1517 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-790770" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-790770' and this object
	W0127 14:01:17.432877 1184207 out.go:270]   Jan 27 14:00:18 addons-790770 kubelet[1517]: E0127 14:00:18.242096    1517 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-790770\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-790770' and this object" logger="UnhandledError"
	I0127 14:01:17.432888 1184207 out.go:358] Setting ErrFile to fd 2...
	I0127 14:01:17.432895 1184207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:01:17.530035 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:17.584432 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:17.615707 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:18.030055 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:18.135196 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:18.137332 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:18.529014 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:18.582934 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:18.614444 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:19.028532 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:19.084831 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:19.128324 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:19.528426 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:19.583731 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:19.614678 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:20.028684 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:20.087182 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:20.115930 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:20.528451 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:20.583363 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:20.614435 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:21.028989 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:21.096539 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:21.117698 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:21.532554 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:21.585132 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:21.632869 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:22.029399 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:22.083346 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:22.114604 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:22.528354 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:22.583844 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:22.614071 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:23.028075 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:23.083270 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:23.114028 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:23.529853 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:23.583696 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:23.614092 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:24.032676 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:24.084112 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:24.120957 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:24.529681 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:24.583942 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:24.615456 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:25.028848 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:25.086294 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:25.115135 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:25.529178 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:25.583094 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:25.614170 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:26.032736 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:26.088152 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:26.114718 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:26.528250 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:26.583259 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:26.614452 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:27.029539 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:01:27.082780 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:27.131437 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:27.444212 1184207 system_pods.go:59] 18 kube-system pods found
	I0127 14:01:27.444255 1184207 system_pods.go:61] "coredns-668d6bf9bc-wsktl" [86366f18-6468-422c-bb02-21ac021422e5] Running
	I0127 14:01:27.444262 1184207 system_pods.go:61] "csi-hostpath-attacher-0" [dc505d9e-ad08-45df-923f-526ca4c082be] Running
	I0127 14:01:27.444267 1184207 system_pods.go:61] "csi-hostpath-resizer-0" [146a2460-1b73-4ec7-81c1-2e2d1b8140aa] Running
	I0127 14:01:27.444275 1184207 system_pods.go:61] "csi-hostpathplugin-jwt2n" [d03179bb-7384-49d4-ab61-9e0401808c7f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 14:01:27.444280 1184207 system_pods.go:61] "etcd-addons-790770" [f4dfb592-526a-4652-804c-dfee5d9c1796] Running
	I0127 14:01:27.444285 1184207 system_pods.go:61] "kindnet-fvfnt" [c358e53a-5888-4112-a75c-3e0d890ddc3b] Running
	I0127 14:01:27.444289 1184207 system_pods.go:61] "kube-apiserver-addons-790770" [cd5473a2-8476-425c-be34-8b6edb443d7a] Running
	I0127 14:01:27.444293 1184207 system_pods.go:61] "kube-controller-manager-addons-790770" [69ca4c25-a09d-4930-8726-5ef8172098ec] Running
	I0127 14:01:27.444297 1184207 system_pods.go:61] "kube-ingress-dns-minikube" [dec90490-aecc-4092-9329-12cd874952be] Running
	I0127 14:01:27.444301 1184207 system_pods.go:61] "kube-proxy-5nnw4" [f92c619f-77a1-4342-9902-988504c16123] Running
	I0127 14:01:27.444306 1184207 system_pods.go:61] "kube-scheduler-addons-790770" [c25f2017-080b-4b27-a8f6-409e7e20fffd] Running
	I0127 14:01:27.444310 1184207 system_pods.go:61] "metrics-server-7fbb699795-xwjq9" [86fa6622-b0d8-40d1-a078-fb3cd93b374c] Running
	I0127 14:01:27.444314 1184207 system_pods.go:61] "nvidia-device-plugin-daemonset-t85g9" [fd20b544-75f0-46ac-beee-0f2d1020bcb4] Running
	I0127 14:01:27.444317 1184207 system_pods.go:61] "registry-6c88467877-7mfnl" [8373b1bf-7553-43a4-bc9f-a9a0f3320699] Running
	I0127 14:01:27.444327 1184207 system_pods.go:61] "registry-proxy-s4zc7" [43da24c2-fd2a-4d7f-b8ec-be818eb4ec64] Running
	I0127 14:01:27.444332 1184207 system_pods.go:61] "snapshot-controller-68b874b76f-t4rdn" [52c4b1f1-9978-4720-9f3f-c006aa1f63f2] Running
	I0127 14:01:27.444335 1184207 system_pods.go:61] "snapshot-controller-68b874b76f-xrfm4" [c3e041f7-9ef4-437f-9239-4059100957bb] Running
	I0127 14:01:27.444339 1184207 system_pods.go:61] "storage-provisioner" [2c034f9b-d318-43f6-affb-8197ebbbf9e2] Running
	I0127 14:01:27.444349 1184207 system_pods.go:74] duration metric: took 11.366605741s to wait for pod list to return data ...
	I0127 14:01:27.444357 1184207 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:01:27.447069 1184207 default_sa.go:45] found service account: "default"
	I0127 14:01:27.447098 1184207 default_sa.go:55] duration metric: took 2.730899ms for default service account to be created ...
	I0127 14:01:27.447109 1184207 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:01:27.457171 1184207 system_pods.go:87] 18 kube-system pods found
	I0127 14:01:27.460167 1184207 system_pods.go:105] "coredns-668d6bf9bc-wsktl" [86366f18-6468-422c-bb02-21ac021422e5] Running
	I0127 14:01:27.460192 1184207 system_pods.go:105] "csi-hostpath-attacher-0" [dc505d9e-ad08-45df-923f-526ca4c082be] Running
	I0127 14:01:27.460198 1184207 system_pods.go:105] "csi-hostpath-resizer-0" [146a2460-1b73-4ec7-81c1-2e2d1b8140aa] Running
	I0127 14:01:27.460209 1184207 system_pods.go:105] "csi-hostpathplugin-jwt2n" [d03179bb-7384-49d4-ab61-9e0401808c7f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 14:01:27.460214 1184207 system_pods.go:105] "etcd-addons-790770" [f4dfb592-526a-4652-804c-dfee5d9c1796] Running
	I0127 14:01:27.460220 1184207 system_pods.go:105] "kindnet-fvfnt" [c358e53a-5888-4112-a75c-3e0d890ddc3b] Running
	I0127 14:01:27.460225 1184207 system_pods.go:105] "kube-apiserver-addons-790770" [cd5473a2-8476-425c-be34-8b6edb443d7a] Running
	I0127 14:01:27.460230 1184207 system_pods.go:105] "kube-controller-manager-addons-790770" [69ca4c25-a09d-4930-8726-5ef8172098ec] Running
	I0127 14:01:27.460235 1184207 system_pods.go:105] "kube-ingress-dns-minikube" [dec90490-aecc-4092-9329-12cd874952be] Running
	I0127 14:01:27.460246 1184207 system_pods.go:105] "kube-proxy-5nnw4" [f92c619f-77a1-4342-9902-988504c16123] Running
	I0127 14:01:27.460251 1184207 system_pods.go:105] "kube-scheduler-addons-790770" [c25f2017-080b-4b27-a8f6-409e7e20fffd] Running
	I0127 14:01:27.460255 1184207 system_pods.go:105] "metrics-server-7fbb699795-xwjq9" [86fa6622-b0d8-40d1-a078-fb3cd93b374c] Running
	I0127 14:01:27.460260 1184207 system_pods.go:105] "nvidia-device-plugin-daemonset-t85g9" [fd20b544-75f0-46ac-beee-0f2d1020bcb4] Running
	I0127 14:01:27.460264 1184207 system_pods.go:105] "registry-6c88467877-7mfnl" [8373b1bf-7553-43a4-bc9f-a9a0f3320699] Running
	I0127 14:01:27.460270 1184207 system_pods.go:105] "registry-proxy-s4zc7" [43da24c2-fd2a-4d7f-b8ec-be818eb4ec64] Running
	I0127 14:01:27.460275 1184207 system_pods.go:105] "snapshot-controller-68b874b76f-t4rdn" [52c4b1f1-9978-4720-9f3f-c006aa1f63f2] Running
	I0127 14:01:27.460279 1184207 system_pods.go:105] "snapshot-controller-68b874b76f-xrfm4" [c3e041f7-9ef4-437f-9239-4059100957bb] Running
	I0127 14:01:27.460283 1184207 system_pods.go:105] "storage-provisioner" [2c034f9b-d318-43f6-affb-8197ebbbf9e2] Running
	I0127 14:01:27.460289 1184207 system_pods.go:147] duration metric: took 13.175091ms to wait for k8s-apps to be running ...
	I0127 14:01:27.460295 1184207 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:01:27.460351 1184207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:01:27.472451 1184207 system_svc.go:56] duration metric: took 12.145762ms WaitForService to wait for kubelet
	I0127 14:01:27.472481 1184207 kubeadm.go:582] duration metric: took 1m53.104223983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:01:27.472502 1184207 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:01:27.475921 1184207 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0127 14:01:27.475954 1184207 node_conditions.go:123] node cpu capacity is 2
	I0127 14:01:27.475967 1184207 node_conditions.go:105] duration metric: took 3.458737ms to run NodePressure ...
	I0127 14:01:27.475981 1184207 start.go:241] waiting for startup goroutines ...
	I0127 14:01:27.528794 1184207 kapi.go:107] duration metric: took 1m42.004009102s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0127 14:01:27.532077 1184207 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-790770 cluster.
	I0127 14:01:27.534957 1184207 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 14:01:27.537818 1184207 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 14:01:27.583316 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:27.614056 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:28.083428 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:28.114601 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:28.582926 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:28.613620 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:29.082381 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:29.115228 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:29.582982 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:29.614790 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:30.083403 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:30.114927 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:30.583139 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:30.614302 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:31.083555 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:31.114883 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:31.582952 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:31.614670 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:32.082793 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:32.121483 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:32.582597 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:32.613748 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:33.082566 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:33.114892 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:33.582457 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:33.614649 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:34.085461 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:34.114121 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:34.582984 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:34.613954 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:35.083804 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:35.114339 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:35.583445 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:35.614534 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:36.084148 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:36.117057 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:36.582834 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:36.613896 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:37.082888 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:37.116405 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:37.583245 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:37.614783 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:38.084721 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:38.113988 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:38.582809 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:38.613822 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:39.082432 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:39.114346 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:39.583400 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:39.615128 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:40.083266 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:40.114565 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:40.583978 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:40.614092 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:41.083664 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:41.114015 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:41.582856 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:41.614072 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:42.083763 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:42.115582 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:42.583688 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:42.614372 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:43.084760 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:43.114700 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:43.582980 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:43.614107 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:44.083116 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:44.113992 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:44.582941 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:44.614285 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:45.101245 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:45.115979 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:45.583127 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:45.613872 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:46.082962 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:46.114554 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:46.582628 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:46.614164 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:47.084381 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:47.114797 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:47.582890 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:47.614776 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:48.082866 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:48.114145 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:48.583649 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:48.613817 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:49.082675 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:49.113658 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:49.582641 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:49.613603 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:50.085768 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:50.114239 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:50.583788 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:50.613766 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:51.083093 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:51.114343 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:51.583212 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:51.614414 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:52.083654 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:52.184446 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:52.583202 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:52.614520 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:53.083580 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:53.115223 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:53.584716 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:53.613558 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:54.082776 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:54.114558 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:54.583300 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:54.614664 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:55.083149 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:55.114581 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:55.584049 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:55.613727 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:56.086051 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:56.125844 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:56.583291 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:56.614575 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:57.083766 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:57.113935 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:57.582932 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:57.614237 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:58.083516 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:58.114522 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:58.583864 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:58.613976 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:59.082556 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:59.114998 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:01:59.583106 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:01:59.614775 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:00.084046 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:00.125065 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:00.583164 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:00.614497 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:01.084075 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:01.114134 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:01.583477 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:01.614711 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:02.082695 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:02.114511 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:02.583837 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:02.614730 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:03.083329 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:03.114132 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:03.583315 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:03.614470 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:04.083576 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:04.115131 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:04.583058 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:04.613814 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:05.082892 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:05.114756 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:05.583434 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:05.614415 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:06.088274 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:06.116611 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:06.583788 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:06.624080 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:07.083184 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:07.114155 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:07.582676 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:07.614424 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:08.084298 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:08.115218 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:08.587053 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:08.615860 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:09.084307 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:09.115673 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:09.583262 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:09.615966 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:10.083994 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:10.116358 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:10.584391 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:10.615625 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:11.085008 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:11.115021 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:11.583152 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:11.613992 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:12.083702 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:12.114861 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:12.584089 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:12.614674 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:13.083332 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:13.114763 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:13.583036 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:13.613948 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:14.083227 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:14.114793 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:14.583707 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:14.621214 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:15.103292 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:15.129919 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:15.583210 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:15.614162 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:16.084083 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:16.115344 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:16.583881 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:16.617134 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:17.083911 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:17.116838 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:17.584305 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:17.615931 1184207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:02:18.085249 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:18.115310 1184207 kapi.go:107] duration metric: took 2m37.505625384s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 14:02:18.583200 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:19.084431 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:19.584327 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:20.083347 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:20.583783 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:21.084332 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:21.584098 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:22.083257 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:22.585484 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:23.082690 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:23.585192 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:24.085171 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:24.583098 1184207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:02:25.084120 1184207 kapi.go:107] duration metric: took 2m44.006084959s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 14:02:25.087291 1184207 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0127 14:02:25.090204 1184207 addons.go:514] duration metric: took 2m50.721465012s for enable addons: enabled=[nvidia-device-plugin storage-provisioner amd-gpu-device-plugin ingress-dns inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0127 14:02:25.090274 1184207 start.go:246] waiting for cluster config update ...
	I0127 14:02:25.090299 1184207 start.go:255] writing updated cluster config ...
	I0127 14:02:25.090672 1184207 ssh_runner.go:195] Run: rm -f paused
	I0127 14:02:25.486742 1184207 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:02:25.489882 1184207 out.go:177] * Done! kubectl is now configured to use "addons-790770" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 14:05:36 addons-790770 crio[973]: time="2025-01-27 14:05:36.438705437Z" level=info msg="Removed container 1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0: default/cloud-spanner-emulator-5d76cffbc-gjk99/cloud-spanner-emulator" id=c57ffddf-a686-433d-9246-6103c6327ca1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.121489859Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-v2dpt/POD" id=c6a01412-495f-46b6-bb08-673874681c73 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.121548247Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.155661984Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-v2dpt Namespace:default ID:f39e563f6c86f9a10f5e5b7662c56975366b6fd32e838a447d7a157a7080a916 UID:17fc014f-90ec-4a66-a6d6-4e958f261728 NetNS:/var/run/netns/90929853-5e5f-4338-b736-1d7ec6db75f0 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.155707498Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-v2dpt to CNI network \"kindnet\" (type=ptp)"
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.178812362Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-v2dpt Namespace:default ID:f39e563f6c86f9a10f5e5b7662c56975366b6fd32e838a447d7a157a7080a916 UID:17fc014f-90ec-4a66-a6d6-4e958f261728 NetNS:/var/run/netns/90929853-5e5f-4338-b736-1d7ec6db75f0 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.178981405Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-v2dpt for CNI network kindnet (type=ptp)"
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.185671385Z" level=info msg="Ran pod sandbox f39e563f6c86f9a10f5e5b7662c56975366b6fd32e838a447d7a157a7080a916 with infra container: default/hello-world-app-7d9564db4-v2dpt/POD" id=c6a01412-495f-46b6-bb08-673874681c73 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.191654047Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=35ceebba-5e7f-4be1-8382-3ba73021e2e4 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.191897002Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=35ceebba-5e7f-4be1-8382-3ba73021e2e4 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.192778786Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b0607f8c-fda4-4874-81f8-900632dd688f name=/runtime.v1.ImageService/PullImage
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.195304195Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Jan 27 14:06:08 addons-790770 crio[973]: time="2025-01-27 14:06:08.555133571Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.413347366Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=b0607f8c-fda4-4874-81f8-900632dd688f name=/runtime.v1.ImageService/PullImage
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.415476630Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=63d6b781-5d70-4420-bf02-7e10bf4d2243 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.416218508Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=63d6b781-5d70-4420-bf02-7e10bf4d2243 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.417199607Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=840c3046-6de5-41d5-82c6-db5334481b28 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.418003935Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=840c3046-6de5-41d5-82c6-db5334481b28 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.419180317Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-v2dpt/hello-world-app" id=b505f98d-ae58-43cb-b1ae-3a77441eb04a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.419271616Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.442181418Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9d1134b3763c945439509ac754fb542fab89976c28e8cdd3c543c8579270b033/merged/etc/passwd: no such file or directory"
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.442223765Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9d1134b3763c945439509ac754fb542fab89976c28e8cdd3c543c8579270b033/merged/etc/group: no such file or directory"
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.495178546Z" level=info msg="Created container 0a1858a45ac036ee00911a32d8d66d4df0a22e3088aa4448df4afd902acd912e: default/hello-world-app-7d9564db4-v2dpt/hello-world-app" id=b505f98d-ae58-43cb-b1ae-3a77441eb04a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.496082952Z" level=info msg="Starting container: 0a1858a45ac036ee00911a32d8d66d4df0a22e3088aa4448df4afd902acd912e" id=c21feea4-7f24-46c2-beac-939973a658e0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 14:06:09 addons-790770 crio[973]: time="2025-01-27 14:06:09.505864546Z" level=info msg="Started container" PID=8938 containerID=0a1858a45ac036ee00911a32d8d66d4df0a22e3088aa4448df4afd902acd912e description=default/hello-world-app-7d9564db4-v2dpt/hello-world-app id=c21feea4-7f24-46c2-beac-939973a658e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f39e563f6c86f9a10f5e5b7662c56975366b6fd32e838a447d7a157a7080a916
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	0a1858a45ac03       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   f39e563f6c86f       hello-world-app-7d9564db4-v2dpt
	c6bb7935637f1       docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10                              2 minutes ago            Running             nginx                     0                   95c913a14e6a7       nginx
	b3d67b3fbcbd8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   0c4c400e2cbc5       busybox
	f1c057d1256fb       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             3 minutes ago            Running             controller                0                   7ff94906854b9       ingress-nginx-controller-56d7c84fd4-tdtlm
	3d36e58eda640       d54655ed3a8543a162b688a24bf969ee1a28d906b8ccb30188059247efdae234                                                             4 minutes ago            Exited              patch                     2                   a8db30456ddf6       ingress-nginx-admission-patch-l5mt7
	37b69d248786a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   4 minutes ago            Exited              create                    0                   4bd6b768c902a       ingress-nginx-admission-create-fwgb5
	58a73822df00f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns      0                   1cb432bf084cd       kube-ingress-dns-minikube
	9df93da9b4578       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             5 minutes ago            Running             coredns                   0                   49d5c13b1eed0       coredns-668d6bf9bc-wsktl
	7269561d7328a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   5fa1256d5be39       storage-provisioner
	37592d3cc643b       e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0                                                             6 minutes ago            Running             kube-proxy                0                   fbceb9ec36faa       kube-proxy-5nnw4
	c64cb973388bc       2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903                                                             6 minutes ago            Running             kindnet-cni               0                   afdd62ed8d456       kindnet-fvfnt
	b7a989a14162f       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             6 minutes ago            Running             etcd                      0                   24c54e4ec2015       etcd-addons-790770
	60863df732592       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c                                                             6 minutes ago            Running             kube-scheduler            0                   6e15d22c18fe7       kube-scheduler-addons-790770
	9d6052af2899c       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19                                                             6 minutes ago            Running             kube-apiserver            0                   42d475a15f8b8       kube-apiserver-addons-790770
	2ef63a2080626       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13                                                             6 minutes ago            Running             kube-controller-manager   0                   533ed775423ab       kube-controller-manager-addons-790770
	
	
	==> coredns [9df93da9b4578b318ebd1c095a4b55308b6aa76560481ef26cb03cc28e6f6a2e] <==
	[INFO] 10.244.0.10:40940 - 33056 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001848688s
	[INFO] 10.244.0.10:40940 - 30414 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000129247s
	[INFO] 10.244.0.10:40940 - 56277 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000099873s
	[INFO] 10.244.0.10:46501 - 7773 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161625s
	[INFO] 10.244.0.10:46501 - 7544 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084193s
	[INFO] 10.244.0.10:50155 - 50487 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095935s
	[INFO] 10.244.0.10:50155 - 50924 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008585s
	[INFO] 10.244.0.10:54302 - 25229 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105305s
	[INFO] 10.244.0.10:54302 - 25040 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084907s
	[INFO] 10.244.0.10:47717 - 21485 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001624466s
	[INFO] 10.244.0.10:47717 - 21682 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00284248s
	[INFO] 10.244.0.10:42068 - 16102 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145772s
	[INFO] 10.244.0.10:42068 - 15657 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101088s
	[INFO] 10.244.0.19:34797 - 61456 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195127s
	[INFO] 10.244.0.19:47542 - 554 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000243087s
	[INFO] 10.244.0.19:36244 - 57737 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158138s
	[INFO] 10.244.0.19:51217 - 16713 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159935s
	[INFO] 10.244.0.19:32775 - 61882 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091717s
	[INFO] 10.244.0.19:59232 - 61632 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083611s
	[INFO] 10.244.0.19:45678 - 12577 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001956086s
	[INFO] 10.244.0.19:59603 - 21647 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001832163s
	[INFO] 10.244.0.19:58040 - 21926 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000668253s
	[INFO] 10.244.0.19:33948 - 30877 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004047983s
	[INFO] 10.244.0.24:47877 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000228547s
	[INFO] 10.244.0.24:54292 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169051s
	
	
	==> describe nodes <==
	Name:               addons-790770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-790770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5089c94d5c3e26f81a121b7614c4f7f440f9c0
	                    minikube.k8s.io/name=addons-790770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_59_30_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-790770
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:59:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-790770
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:06:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:04:35 +0000   Mon, 27 Jan 2025 13:59:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:04:35 +0000   Mon, 27 Jan 2025 13:59:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:04:35 +0000   Mon, 27 Jan 2025 13:59:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:04:35 +0000   Mon, 27 Jan 2025 14:00:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-790770
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb15a0915bab44469228c25de2010886
	  System UUID:                f9b95af3-43d3-4bde-ba57-64c9d02f07c0
	  Boot ID:                    b7abfdb7-2453-48c7-aae3-32d112d9514f
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  default                     hello-world-app-7d9564db4-v2dpt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-tdtlm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m29s
	  kube-system                 coredns-668d6bf9bc-wsktl                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m35s
	  kube-system                 etcd-addons-790770                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m40s
	  kube-system                 kindnet-fvfnt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m36s
	  kube-system                 kube-apiserver-addons-790770                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-controller-manager-addons-790770        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-5nnw4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-scheduler-addons-790770                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m29s                  kube-proxy       
	  Normal   Starting                 6m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m47s (x8 over 6m47s)  kubelet          Node addons-790770 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m47s (x8 over 6m47s)  kubelet          Node addons-790770 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m47s (x8 over 6m47s)  kubelet          Node addons-790770 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m40s                  kubelet          Node addons-790770 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m40s                  kubelet          Node addons-790770 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m40s                  kubelet          Node addons-790770 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m36s                  node-controller  Node addons-790770 event: Registered Node addons-790770 in Controller
	  Normal   NodeReady                5m51s                  kubelet          Node addons-790770 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan27 12:50] hrtimer: interrupt took 50331376 ns
	
	
	==> etcd [b7a989a14162f7fa77960e45c1d2b7590386cb2f25cb0d69a60e1ef45caabc4f] <==
	{"level":"info","ts":"2025-01-27T13:59:36.275369Z","caller":"traceutil/trace.go:171","msg":"trace[1448257051] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"166.888452ms","start":"2025-01-27T13:59:36.108428Z","end":"2025-01-27T13:59:36.275316Z","steps":["trace[1448257051] 'process raft request'  (duration: 49.965541ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:59:36.294456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.045941ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:59:36.294535Z","caller":"traceutil/trace.go:171","msg":"trace[1876422922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:384; }","duration":"186.144944ms","start":"2025-01-27T13:59:36.108371Z","end":"2025-01-27T13:59:36.294516Z","steps":["trace[1876422922] 'agreement among raft nodes before linearized reading'  (duration: 50.111815ms)","trace[1876422922] 'range keys from in-memory index tree'  (duration: 135.908239ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:59:37.305795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.307954ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128034877029090198 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:358 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:128 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T13:59:37.306111Z","caller":"traceutil/trace.go:171","msg":"trace[822765356] linearizableReadLoop","detail":"{readStateIndex:403; appliedIndex:401; }","duration":"142.877316ms","start":"2025-01-27T13:59:37.163211Z","end":"2025-01-27T13:59:37.306088Z","steps":["trace[822765356] 'read index received'  (duration: 9.807471ms)","trace[822765356] 'applied index is now lower than readState.Index'  (duration: 133.069221ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T13:59:37.307031Z","caller":"traceutil/trace.go:171","msg":"trace[938471514] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"148.047535ms","start":"2025-01-27T13:59:37.158971Z","end":"2025-01-27T13:59:37.307018Z","steps":["trace[938471514] 'process raft request'  (duration: 14.090237ms)","trace[938471514] 'compare'  (duration: 132.200532ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T13:59:37.307325Z","caller":"traceutil/trace.go:171","msg":"trace[1277635561] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"144.293937ms","start":"2025-01-27T13:59:37.163022Z","end":"2025-01-27T13:59:37.307316Z","steps":["trace[1277635561] 'process raft request'  (duration: 142.949768ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:59:37.310278Z","caller":"traceutil/trace.go:171","msg":"trace[378880124] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"147.00305ms","start":"2025-01-27T13:59:37.163264Z","end":"2025-01-27T13:59:37.310267Z","steps":["trace[378880124] 'process raft request'  (duration: 142.767883ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:59:37.310449Z","caller":"traceutil/trace.go:171","msg":"trace[1710694533] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"147.108331ms","start":"2025-01-27T13:59:37.163333Z","end":"2025-01-27T13:59:37.310441Z","steps":["trace[1710694533] 'process raft request'  (duration: 142.724372ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:59:37.310624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.399647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:59:37.310692Z","caller":"traceutil/trace.go:171","msg":"trace[171770771] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"147.477957ms","start":"2025-01-27T13:59:37.163206Z","end":"2025-01-27T13:59:37.310684Z","steps":["trace[171770771] 'agreement among raft nodes before linearized reading'  (duration: 147.364742ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:59:37.310871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.175741ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-5nnw4\" limit:1 ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2025-01-27T13:59:37.310937Z","caller":"traceutil/trace.go:171","msg":"trace[1418798155] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-5nnw4; range_end:; response_count:1; response_revision:393; }","duration":"146.244492ms","start":"2025-01-27T13:59:37.164686Z","end":"2025-01-27T13:59:37.310930Z","steps":["trace[1418798155] 'agreement among raft nodes before linearized reading'  (duration: 146.142255ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:59:37.565411Z","caller":"traceutil/trace.go:171","msg":"trace[1255344014] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:405; }","duration":"111.873086ms","start":"2025-01-27T13:59:37.453522Z","end":"2025-01-27T13:59:37.565395Z","steps":["trace[1255344014] 'read index received'  (duration: 111.711411ms)","trace[1255344014] 'applied index is now lower than readState.Index'  (duration: 161.117µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T13:59:37.605377Z","caller":"traceutil/trace.go:171","msg":"trace[1700456680] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"152.031731ms","start":"2025-01-27T13:59:37.453327Z","end":"2025-01-27T13:59:37.605359Z","steps":["trace[1700456680] 'process raft request'  (duration: 111.9522ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:59:37.606014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.889057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:59:37.606097Z","caller":"traceutil/trace.go:171","msg":"trace[1441788719] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:394; }","duration":"114.972216ms","start":"2025-01-27T13:59:37.491115Z","end":"2025-01-27T13:59:37.606087Z","steps":["trace[1441788719] 'agreement among raft nodes before linearized reading'  (duration: 114.859888ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:59:37.605609Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.071427ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:59:37.625265Z","caller":"traceutil/trace.go:171","msg":"trace[632592248] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:394; }","duration":"171.728208ms","start":"2025-01-27T13:59:37.453515Z","end":"2025-01-27T13:59:37.625243Z","steps":["trace[632592248] 'agreement among raft nodes before linearized reading'  (duration: 151.895434ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:59:37.626155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.8422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-790770\" limit:1 ","response":"range_response_count:1 size:5815"}
	{"level":"info","ts":"2025-01-27T13:59:37.632666Z","caller":"traceutil/trace.go:171","msg":"trace[1158769669] range","detail":"{range_begin:/registry/minions/addons-790770; range_end:; response_count:1; response_revision:394; }","duration":"141.355176ms","start":"2025-01-27T13:59:37.491289Z","end":"2025-01-27T13:59:37.632645Z","steps":["trace[1158769669] 'agreement among raft nodes before linearized reading'  (duration: 134.786815ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:59:37.605655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.247017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-27T13:59:37.605675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.41121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:59:37.665934Z","caller":"traceutil/trace.go:171","msg":"trace[1584106763] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:394; }","duration":"212.656062ms","start":"2025-01-27T13:59:37.453260Z","end":"2025-01-27T13:59:37.665916Z","steps":["trace[1584106763] 'agreement among raft nodes before linearized reading'  (duration: 151.760657ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:59:37.666085Z","caller":"traceutil/trace.go:171","msg":"trace[1175071860] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:394; }","duration":"212.670281ms","start":"2025-01-27T13:59:37.453404Z","end":"2025-01-27T13:59:37.666074Z","steps":["trace[1175071860] 'agreement among raft nodes before linearized reading'  (duration: 152.233897ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:06:10 up  3:48,  0 users,  load average: 0.14, 1.15, 2.13
	Linux addons-790770 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c64cb973388bc0a2909e59b1f91e1db13cb6329bdef0d091f75a11329dd18cd2] <==
	I0127 14:04:07.725138       1 main.go:301] handling current node
	I0127 14:04:17.725027       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:04:17.725061       1 main.go:301] handling current node
	I0127 14:04:27.725745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:04:27.725781       1 main.go:301] handling current node
	I0127 14:04:37.725438       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:04:37.725478       1 main.go:301] handling current node
	I0127 14:04:47.724742       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:04:47.724775       1 main.go:301] handling current node
	I0127 14:04:57.724915       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:04:57.724950       1 main.go:301] handling current node
	I0127 14:05:07.725133       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:05:07.725180       1 main.go:301] handling current node
	I0127 14:05:17.725037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:05:17.725072       1 main.go:301] handling current node
	I0127 14:05:27.725032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:05:27.725065       1 main.go:301] handling current node
	I0127 14:05:37.725608       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:05:37.725642       1 main.go:301] handling current node
	I0127 14:05:47.725070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:05:47.725207       1 main.go:301] handling current node
	I0127 14:05:57.725041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:05:57.725149       1 main.go:301] handling current node
	I0127 14:06:07.725726       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 14:06:07.725836       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d6052af2899c662705fa6e3722cb09ef2534baea4800d3dd66727c13f733e94] <==
	E0127 14:02:36.579591       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42670: use of closed network connection
	I0127 14:02:46.511118       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.81.149"}
	I0127 14:03:41.218960       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 14:03:42.351437       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0127 14:03:46.924284       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 14:03:47.221110       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.211.116"}
	I0127 14:03:49.455896       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0127 14:03:51.547090       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0127 14:04:10.499073       1 watch.go:278] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0127 14:04:11.454118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:04:11.454175       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:04:11.475113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:04:11.475176       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:04:11.510484       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:04:11.510541       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:04:11.541083       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:04:11.541401       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0127 14:04:12.510870       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0127 14:04:12.541479       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0127 14:04:12.561313       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0127 14:04:29.598087       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0127 14:04:29.608188       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0127 14:04:29.619084       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0127 14:04:44.620536       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0127 14:06:08.065193       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.139.53"}
	
	
	==> kube-controller-manager [2ef63a2080626bc93b29eb24e3014baa129149ae33e99dcc19af31dc2d4029f3] <==
	W0127 14:05:19.105544       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:05:19.106613       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 14:05:19.107584       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:05:19.107625       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 14:05:28.608318       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0127 14:05:35.534995       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:05:35.536301       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 14:05:35.538661       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:05:35.539490       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 14:05:35.691974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5d76cffbc" duration="4.619µs"
	W0127 14:05:56.510739       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:05:56.511851       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:05:56.512827       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:05:56.512867       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 14:06:05.950252       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:06:05.951331       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0127 14:06:05.952374       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:06:05.952412       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 14:06:07.829291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.042553ms"
	I0127 14:06:07.842716       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.299204ms"
	I0127 14:06:07.842877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="38.359µs"
	W0127 14:06:10.104309       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:06:10.105523       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 14:06:10.106707       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:06:10.106748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [37592d3cc643b7da490ad38caaf1d133c7058b03e17c4fc779114e2ea48339f7] <==
	I0127 13:59:39.048286       1 server_linux.go:66] "Using iptables proxy"
	I0127 13:59:39.741253       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0127 13:59:39.756529       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:59:40.255109       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0127 13:59:40.255165       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:59:40.283222       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:59:40.283657       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:59:40.285396       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:59:40.290755       1 config.go:199] "Starting service config controller"
	I0127 13:59:40.290853       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:59:40.290925       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:59:40.290958       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:59:40.291564       1 config.go:329] "Starting node config controller"
	I0127 13:59:40.292383       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:59:40.391317       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 13:59:40.391361       1 shared_informer.go:320] Caches are synced for service config
	I0127 13:59:40.394258       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [60863df73259277c6081f4e2e56568c43f6de16503cf7c73b579f8eed272aa01] <==
	W0127 13:59:26.881906       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 13:59:26.881928       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:26.881972       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 13:59:26.881989       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:26.882056       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 13:59:26.882082       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:26.882169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 13:59:26.882187       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:26.882213       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 13:59:26.882229       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:26.882289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 13:59:26.882309       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:26.882392       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 13:59:26.882414       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:26.884439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 13:59:26.884473       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:27.691253       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 13:59:27.691373       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:27.712001       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 13:59:27.712049       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 13:59:27.732473       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 13:59:27.732579       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:59:27.833716       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 13:59:27.833764       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 13:59:30.772725       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:05:31 addons-790770 kubelet[1517]: I0127 14:05:31.070057    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd20b544-75f0-46ac-beee-0f2d1020bcb4" path="/var/lib/kubelet/pods/fd20b544-75f0-46ac-beee-0f2d1020bcb4/volumes"
	Jan 27 14:05:35 addons-790770 kubelet[1517]: E0127 14:05:35.637582    1517 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e93322b8c9b46dc565539edf0201e603a0a74ee4bbe377b01d3c9cd9ca155114/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e93322b8c9b46dc565539edf0201e603a0a74ee4bbe377b01d3c9cd9ca155114/diff: no such file or directory, extraDiskErr: <nil>
	Jan 27 14:05:36 addons-790770 kubelet[1517]: I0127 14:05:36.104479    1517 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnzh2\" (UniqueName: \"kubernetes.io/projected/1e628989-3be3-4245-93ae-daeeb008f1b1-kube-api-access-fnzh2\") pod \"1e628989-3be3-4245-93ae-daeeb008f1b1\" (UID: \"1e628989-3be3-4245-93ae-daeeb008f1b1\") "
	Jan 27 14:05:36 addons-790770 kubelet[1517]: I0127 14:05:36.106500    1517 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e628989-3be3-4245-93ae-daeeb008f1b1-kube-api-access-fnzh2" (OuterVolumeSpecName: "kube-api-access-fnzh2") pod "1e628989-3be3-4245-93ae-daeeb008f1b1" (UID: "1e628989-3be3-4245-93ae-daeeb008f1b1"). InnerVolumeSpecName "kube-api-access-fnzh2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Jan 27 14:05:36 addons-790770 kubelet[1517]: I0127 14:05:36.204825    1517 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fnzh2\" (UniqueName: \"kubernetes.io/projected/1e628989-3be3-4245-93ae-daeeb008f1b1-kube-api-access-fnzh2\") on node \"addons-790770\" DevicePath \"\""
	Jan 27 14:05:36 addons-790770 kubelet[1517]: I0127 14:05:36.418379    1517 scope.go:117] "RemoveContainer" containerID="1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0"
	Jan 27 14:05:36 addons-790770 kubelet[1517]: I0127 14:05:36.439020    1517 scope.go:117] "RemoveContainer" containerID="1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0"
	Jan 27 14:05:36 addons-790770 kubelet[1517]: E0127 14:05:36.439530    1517 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0\": container with ID starting with 1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0 not found: ID does not exist" containerID="1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0"
	Jan 27 14:05:36 addons-790770 kubelet[1517]: I0127 14:05:36.439567    1517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0"} err="failed to get container status \"1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0\": rpc error: code = NotFound desc = could not find container \"1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0\": container with ID starting with 1170f8dfbe9bf34a58a879489696b458ece6077f9c34f179ebe21aec5219c6a0 not found: ID does not exist"
	Jan 27 14:05:37 addons-790770 kubelet[1517]: I0127 14:05:37.069871    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e628989-3be3-4245-93ae-daeeb008f1b1" path="/var/lib/kubelet/pods/1e628989-3be3-4245-93ae-daeeb008f1b1/volumes"
	Jan 27 14:05:39 addons-790770 kubelet[1517]: E0127 14:05:39.264249    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986739264001004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:05:39 addons-790770 kubelet[1517]: E0127 14:05:39.264284    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986739264001004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:05:49 addons-790770 kubelet[1517]: E0127 14:05:49.266615    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986749266375825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:05:49 addons-790770 kubelet[1517]: E0127 14:05:49.266652    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986749266375825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:05:59 addons-790770 kubelet[1517]: E0127 14:05:59.269531    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986759269303467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:05:59 addons-790770 kubelet[1517]: E0127 14:05:59.269572    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986759269303467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:06:07 addons-790770 kubelet[1517]: I0127 14:06:07.818835    1517 memory_manager.go:355] "RemoveStaleState removing state" podUID="571231a4-f000-48c2-87e2-4fecf0d78aca" containerName="local-path-provisioner"
	Jan 27 14:06:07 addons-790770 kubelet[1517]: I0127 14:06:07.818880    1517 memory_manager.go:355] "RemoveStaleState removing state" podUID="1e628989-3be3-4245-93ae-daeeb008f1b1" containerName="cloud-spanner-emulator"
	Jan 27 14:06:07 addons-790770 kubelet[1517]: I0127 14:06:07.818889    1517 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd20b544-75f0-46ac-beee-0f2d1020bcb4" containerName="nvidia-device-plugin-ctr"
	Jan 27 14:06:07 addons-790770 kubelet[1517]: I0127 14:06:07.818897    1517 memory_manager.go:355] "RemoveStaleState removing state" podUID="02a8319c-6c2b-4b18-90a0-5cca1fb98d07" containerName="helper-pod"
	Jan 27 14:06:07 addons-790770 kubelet[1517]: I0127 14:06:07.818903    1517 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4b4da69-0d36-464c-b4f4-25c2507f2e9c" containerName="yakd"
	Jan 27 14:06:08 addons-790770 kubelet[1517]: I0127 14:06:07.999961    1517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dhdz\" (UniqueName: \"kubernetes.io/projected/17fc014f-90ec-4a66-a6d6-4e958f261728-kube-api-access-6dhdz\") pod \"hello-world-app-7d9564db4-v2dpt\" (UID: \"17fc014f-90ec-4a66-a6d6-4e958f261728\") " pod="default/hello-world-app-7d9564db4-v2dpt"
	Jan 27 14:06:08 addons-790770 kubelet[1517]: W0127 14:06:08.182307    1517 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b4ddec1f821217327b445106d014b6e3fc930cf0c66ab6baa1e14f85dd1c1ce6/crio-f39e563f6c86f9a10f5e5b7662c56975366b6fd32e838a447d7a157a7080a916 WatchSource:0}: Error finding container f39e563f6c86f9a10f5e5b7662c56975366b6fd32e838a447d7a157a7080a916: Status 404 returned error can't find the container with id f39e563f6c86f9a10f5e5b7662c56975366b6fd32e838a447d7a157a7080a916
	Jan 27 14:06:09 addons-790770 kubelet[1517]: E0127 14:06:09.272058    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986769271788085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:06:09 addons-790770 kubelet[1517]: E0127 14:06:09.272094    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986769271788085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7269561d7328a72d2cd43ec1a24818f9e1b59fd94829b9d7d11cc53df5e34b91] <==
	I0127 14:00:19.239801       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:00:19.264082       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:00:19.264203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:00:19.279022       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:00:19.279377       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6c1a66c-c3b0-454f-8a96-104858a519eb", APIVersion:"v1", ResourceVersion:"947", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-790770_b9e9a068-db3d-4b22-8573-055c127c6cf7 became leader
	I0127 14:00:19.279469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-790770_b9e9a068-db3d-4b22-8573-055c127c6cf7!
	I0127 14:00:19.380268       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-790770_b9e9a068-db3d-4b22-8573-055c127c6cf7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-790770 -n addons-790770
helpers_test.go:261: (dbg) Run:  kubectl --context addons-790770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fwgb5 ingress-nginx-admission-patch-l5mt7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-790770 describe pod ingress-nginx-admission-create-fwgb5 ingress-nginx-admission-patch-l5mt7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-790770 describe pod ingress-nginx-admission-create-fwgb5 ingress-nginx-admission-patch-l5mt7: exit status 1 (82.493878ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fwgb5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-l5mt7" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-790770 describe pod ingress-nginx-admission-create-fwgb5 ingress-nginx-admission-patch-l5mt7: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-790770 addons disable ingress-dns --alsologtostderr -v=1: (1.336101518s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-790770 addons disable ingress --alsologtostderr -v=1: (7.809458698s)
--- FAIL: TestAddons/parallel/Ingress (153.76s)


Test pass (298/330)

Order	Passed test	Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 6.9
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 5.34
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.1
18 TestDownloadOnly/v1.32.1/DeleteAll 0.23
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 225.63
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 10.92
35 TestAddons/parallel/Registry 36.27
37 TestAddons/parallel/InspektorGadget 12.23
38 TestAddons/parallel/MetricsServer 5.9
40 TestAddons/parallel/CSI 56.76
41 TestAddons/parallel/Headlamp 42.9
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 53.34
44 TestAddons/parallel/NvidiaDevicePlugin 6.53
45 TestAddons/parallel/Yakd 11.75
47 TestAddons/StoppedEnableDisable 12.19
48 TestCertOptions 33.52
49 TestCertExpiration 248.9
51 TestForceSystemdFlag 38.72
52 TestForceSystemdEnv 46.05
58 TestErrorSpam/setup 30.27
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.07
61 TestErrorSpam/pause 1.77
62 TestErrorSpam/unpause 1.87
63 TestErrorSpam/stop 1.46
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 76.52
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.35
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.43
75 TestFunctional/serial/CacheCmd/cache/add_local 1.43
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.19
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 41.57
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.77
86 TestFunctional/serial/LogsFileCmd 1.76
87 TestFunctional/serial/InvalidService 4.31
89 TestFunctional/parallel/ConfigCmd 0.53
90 TestFunctional/parallel/DashboardCmd 11.07
91 TestFunctional/parallel/DryRun 0.6
92 TestFunctional/parallel/InternationalLanguage 0.34
93 TestFunctional/parallel/StatusCmd 1.01
97 TestFunctional/parallel/ServiceCmdConnect 8.63
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 27.78
101 TestFunctional/parallel/SSHCmd 0.53
102 TestFunctional/parallel/CpCmd 2.05
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 2.13
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
113 TestFunctional/parallel/License 0.34
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 1.32
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
120 TestFunctional/parallel/ImageCommands/ImageBuild 6.03
121 TestFunctional/parallel/ImageCommands/Setup 0.78
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.69
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
127 TestFunctional/parallel/ServiceCmd/DeployApp 10.25
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.48
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.25
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.84
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
138 TestFunctional/parallel/ServiceCmd/List 0.36
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
141 TestFunctional/parallel/ServiceCmd/Format 0.39
142 TestFunctional/parallel/ServiceCmd/URL 0.38
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
150 TestFunctional/parallel/ProfileCmd/profile_list 0.41
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
152 TestFunctional/parallel/MountCmd/any-port 7.82
153 TestFunctional/parallel/MountCmd/specific-port 1.94
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.08
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 175.27
162 TestMultiControlPlane/serial/DeployApp 9.56
163 TestMultiControlPlane/serial/PingHostFromPods 1.71
164 TestMultiControlPlane/serial/AddWorkerNode 33.14
165 TestMultiControlPlane/serial/NodeLabels 0.13
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1
167 TestMultiControlPlane/serial/CopyFile 19.36
168 TestMultiControlPlane/serial/StopSecondaryNode 12.71
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
170 TestMultiControlPlane/serial/RestartSecondaryNode 24.96
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.34
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 171.3
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.61
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
175 TestMultiControlPlane/serial/StopCluster 35.78
176 TestMultiControlPlane/serial/RestartCluster 94.69
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
178 TestMultiControlPlane/serial/AddSecondaryNode 72.32
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
183 TestJSONOutput/start/Command 81.52
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.77
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.69
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.86
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
208 TestKicCustomNetwork/create_custom_network 39.14
209 TestKicCustomNetwork/use_default_bridge_network 35.86
210 TestKicExistingNetwork 35.73
211 TestKicCustomSubnet 34.95
212 TestKicStaticIP 34.6
213 TestMainNoArgs 0.07
214 TestMinikubeProfile 66.65
217 TestMountStart/serial/StartWithMountFirst 6.89
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 6.63
220 TestMountStart/serial/VerifyMountSecond 0.26
221 TestMountStart/serial/DeleteFirst 1.64
222 TestMountStart/serial/VerifyMountPostDelete 0.28
223 TestMountStart/serial/Stop 1.21
224 TestMountStart/serial/RestartStopped 7.62
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 103.37
229 TestMultiNode/serial/DeployApp2Nodes 6.65
230 TestMultiNode/serial/PingHostFrom2Pods 1.02
231 TestMultiNode/serial/AddNode 27.71
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.74
234 TestMultiNode/serial/CopyFile 10.09
235 TestMultiNode/serial/StopNode 2.23
236 TestMultiNode/serial/StartAfterStop 9.98
237 TestMultiNode/serial/RestartKeepsNodes 88.78
238 TestMultiNode/serial/DeleteNode 5.3
239 TestMultiNode/serial/StopMultiNode 23.83
240 TestMultiNode/serial/RestartMultiNode 55.08
241 TestMultiNode/serial/ValidateNameConflict 31.04
246 TestPreload 130.89
248 TestScheduledStopUnix 105.66
251 TestInsufficientStorage 10.25
252 TestRunningBinaryUpgrade 65.49
254 TestKubernetesUpgrade 385.89
255 TestMissingContainerUpgrade 167.94
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 37.96
259 TestNoKubernetes/serial/StartWithStopK8s 29.98
260 TestNoKubernetes/serial/Start 10.17
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.46
262 TestNoKubernetes/serial/ProfileList 4.46
263 TestNoKubernetes/serial/Stop 1.25
264 TestNoKubernetes/serial/StartNoArgs 7.1
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
266 TestStoppedBinaryUpgrade/Setup 0.8
267 TestStoppedBinaryUpgrade/Upgrade 72.44
268 TestStoppedBinaryUpgrade/MinikubeLogs 2.3
277 TestPause/serial/Start 81.49
278 TestPause/serial/SecondStartNoReconfiguration 29.39
279 TestPause/serial/Pause 0.86
280 TestPause/serial/VerifyStatus 0.42
281 TestPause/serial/Unpause 0.84
282 TestPause/serial/PauseAgain 0.85
283 TestPause/serial/DeletePaused 2.85
284 TestPause/serial/VerifyDeletedResources 0.38
292 TestNetworkPlugins/group/false 4.82
297 TestStartStop/group/old-k8s-version/serial/FirstStart 180.61
299 TestStartStop/group/no-preload/serial/FirstStart 64.61
300 TestStartStop/group/old-k8s-version/serial/DeployApp 11.75
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.82
302 TestStartStop/group/old-k8s-version/serial/Stop 12.34
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.33
304 TestStartStop/group/old-k8s-version/serial/SecondStart 374.67
305 TestStartStop/group/no-preload/serial/DeployApp 10.4
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
307 TestStartStop/group/no-preload/serial/Stop 12.36
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/no-preload/serial/SecondStart 300.05
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
313 TestStartStop/group/no-preload/serial/Pause 3.13
315 TestStartStop/group/embed-certs/serial/FirstStart 59.77
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
319 TestStartStop/group/old-k8s-version/serial/Pause 3.81
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.18
322 TestStartStop/group/embed-certs/serial/DeployApp 11.36
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
324 TestStartStop/group/embed-certs/serial/Stop 11.96
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
326 TestStartStop/group/embed-certs/serial/SecondStart 266.61
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.39
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.44
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 281.97
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
335 TestStartStop/group/embed-certs/serial/Pause 3.15
337 TestStartStop/group/newest-cni/serial/FirstStart 38.6
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
340 TestStartStop/group/newest-cni/serial/Stop 1.23
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
343 TestStartStop/group/newest-cni/serial/SecondStart 20.2
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
345 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
346 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.66
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
350 TestNetworkPlugins/group/auto/Start 89.8
351 TestStartStop/group/newest-cni/serial/Pause 4.39
352 TestNetworkPlugins/group/kindnet/Start 85.87
353 TestNetworkPlugins/group/auto/KubeletFlags 0.31
354 TestNetworkPlugins/group/auto/NetCatPod 11.29
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
357 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
358 TestNetworkPlugins/group/auto/DNS 0.2
359 TestNetworkPlugins/group/auto/Localhost 0.15
360 TestNetworkPlugins/group/auto/HairPin 0.16
361 TestNetworkPlugins/group/kindnet/DNS 0.23
362 TestNetworkPlugins/group/kindnet/Localhost 0.27
363 TestNetworkPlugins/group/kindnet/HairPin 0.19
364 TestNetworkPlugins/group/calico/Start 71.37
365 TestNetworkPlugins/group/custom-flannel/Start 70.3
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.3
368 TestNetworkPlugins/group/calico/NetCatPod 12.3
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.34
371 TestNetworkPlugins/group/calico/DNS 0.2
372 TestNetworkPlugins/group/calico/Localhost 0.17
373 TestNetworkPlugins/group/calico/HairPin 0.16
374 TestNetworkPlugins/group/custom-flannel/DNS 0.21
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
377 TestNetworkPlugins/group/enable-default-cni/Start 77.23
378 TestNetworkPlugins/group/flannel/Start 66.09
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
381 TestNetworkPlugins/group/flannel/NetCatPod 15.27
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.29
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
387 TestNetworkPlugins/group/flannel/DNS 0.19
388 TestNetworkPlugins/group/flannel/Localhost 0.17
389 TestNetworkPlugins/group/flannel/HairPin 0.17
390 TestNetworkPlugins/group/bridge/Start 42.93
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
392 TestNetworkPlugins/group/bridge/NetCatPod 10.27
393 TestNetworkPlugins/group/bridge/DNS 0.17
394 TestNetworkPlugins/group/bridge/Localhost 0.14
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (6.9s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-957055 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-957055 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.895668568s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.90s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 13:58:32.035037 1183449 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0127 13:58:32.035143 1183449 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-957055
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-957055: exit status 85 (91.274627ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-957055 | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC |          |
	|         | -p download-only-957055        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:58:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:58:25.186558 1183455 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:58:25.186767 1183455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:58:25.186794 1183455 out.go:358] Setting ErrFile to fd 2...
	I0127 13:58:25.186813 1183455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:58:25.187108 1183455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	W0127 13:58:25.187289 1183455 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20325-1178062/.minikube/config/config.json: open /home/jenkins/minikube-integration/20325-1178062/.minikube/config/config.json: no such file or directory
	I0127 13:58:25.187843 1183455 out.go:352] Setting JSON to true
	I0127 13:58:25.189312 1183455 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13257,"bootTime":1737973049,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0127 13:58:25.189412 1183455 start.go:139] virtualization:  
	I0127 13:58:25.193827 1183455 out.go:97] [download-only-957055] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 13:58:25.194118 1183455 notify.go:220] Checking for updates...
	W0127 13:58:25.194333 1183455 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 13:58:25.197042 1183455 out.go:169] MINIKUBE_LOCATION=20325
	I0127 13:58:25.200015 1183455 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:58:25.202945 1183455 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	I0127 13:58:25.205850 1183455 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	I0127 13:58:25.208654 1183455 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 13:58:25.214321 1183455 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 13:58:25.214583 1183455 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:58:25.242955 1183455 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 13:58:25.243060 1183455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:58:25.299861 1183455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 13:58:25.290309194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:58:25.299974 1183455 docker.go:318] overlay module found
	I0127 13:58:25.303054 1183455 out.go:97] Using the docker driver based on user configuration
	I0127 13:58:25.303112 1183455 start.go:297] selected driver: docker
	I0127 13:58:25.303124 1183455 start.go:901] validating driver "docker" against <nil>
	I0127 13:58:25.303235 1183455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:58:25.370738 1183455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 13:58:25.362413688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:58:25.370944 1183455 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:58:25.371225 1183455 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 13:58:25.371380 1183455 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 13:58:25.374513 1183455 out.go:169] Using Docker driver with root privileges
	I0127 13:58:25.377313 1183455 cni.go:84] Creating CNI manager for ""
	I0127 13:58:25.377376 1183455 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 13:58:25.377391 1183455 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 13:58:25.377475 1183455 start.go:340] cluster config:
	{Name:download-only-957055 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-957055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:58:25.380448 1183455 out.go:97] Starting "download-only-957055" primary control-plane node in "download-only-957055" cluster
	I0127 13:58:25.380480 1183455 cache.go:121] Beginning downloading kic base image for docker with crio
	I0127 13:58:25.383244 1183455 out.go:97] Pulling base image v0.0.46 ...
	I0127 13:58:25.383277 1183455 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:58:25.383439 1183455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 13:58:25.399547 1183455 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 13:58:25.399744 1183455 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 13:58:25.399870 1183455 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 13:58:25.444800 1183455 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0127 13:58:25.444838 1183455 cache.go:56] Caching tarball of preloaded images
	I0127 13:58:25.448215 1183455 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:58:25.451807 1183455 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 13:58:25.451842 1183455 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0127 13:58:25.531219 1183455 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-957055 host does not exist
	  To start a cluster, run: "minikube start -p download-only-957055"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-957055
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.1/json-events (5.34s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-017417 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-017417 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.335808263s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.34s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 13:58:37.818175 1183449 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0127 13:58:37.818215 1183449 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-017417
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-017417: exit status 85 (95.291168ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-957055 | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC |                     |
	|         | -p download-only-957055        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 13:58 UTC |
	| delete  | -p download-only-957055        | download-only-957055 | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 13:58 UTC |
	| start   | -o=json --download-only        | download-only-017417 | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC |                     |
	|         | -p download-only-017417        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:58:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:58:32.530357 1183652 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:58:32.530544 1183652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:58:32.530553 1183652 out.go:358] Setting ErrFile to fd 2...
	I0127 13:58:32.530559 1183652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:58:32.530785 1183652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 13:58:32.531179 1183652 out.go:352] Setting JSON to true
	I0127 13:58:32.532087 1183652 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13264,"bootTime":1737973049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0127 13:58:32.532156 1183652 start.go:139] virtualization:  
	I0127 13:58:32.535678 1183652 out.go:97] [download-only-017417] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 13:58:32.535887 1183652 notify.go:220] Checking for updates...
	I0127 13:58:32.538837 1183652 out.go:169] MINIKUBE_LOCATION=20325
	I0127 13:58:32.541703 1183652 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:58:32.544542 1183652 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	I0127 13:58:32.547341 1183652 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	I0127 13:58:32.550242 1183652 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 13:58:32.556075 1183652 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 13:58:32.556364 1183652 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:58:32.578227 1183652 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 13:58:32.578344 1183652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:58:32.630066 1183652 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 13:58:32.621232534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:58:32.630183 1183652 docker.go:318] overlay module found
	I0127 13:58:32.633173 1183652 out.go:97] Using the docker driver based on user configuration
	I0127 13:58:32.633206 1183652 start.go:297] selected driver: docker
	I0127 13:58:32.633214 1183652 start.go:901] validating driver "docker" against <nil>
	I0127 13:58:32.633313 1183652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 13:58:32.686878 1183652 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 13:58:32.678278002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 13:58:32.687083 1183652 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:58:32.687351 1183652 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 13:58:32.687498 1183652 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 13:58:32.690934 1183652 out.go:169] Using Docker driver with root privileges
	I0127 13:58:32.693843 1183652 cni.go:84] Creating CNI manager for ""
	I0127 13:58:32.693912 1183652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 13:58:32.693923 1183652 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 13:58:32.694015 1183652 start.go:340] cluster config:
	{Name:download-only-017417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-017417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:58:32.697108 1183652 out.go:97] Starting "download-only-017417" primary control-plane node in "download-only-017417" cluster
	I0127 13:58:32.697132 1183652 cache.go:121] Beginning downloading kic base image for docker with crio
	I0127 13:58:32.700094 1183652 out.go:97] Pulling base image v0.0.46 ...
	I0127 13:58:32.700124 1183652 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:58:32.700299 1183652 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 13:58:32.715790 1183652 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 13:58:32.715912 1183652 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 13:58:32.715937 1183652 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0127 13:58:32.715943 1183652 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0127 13:58:32.715953 1183652 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0127 13:58:32.772358 1183652 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0127 13:58:32.772394 1183652 cache.go:56] Caching tarball of preloaded images
	I0127 13:58:32.773217 1183652 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:58:32.776238 1183652 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 13:58:32.776281 1183652 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 ...
	I0127 13:58:32.845083 1183652 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:2975fc7b8b3f798b17cd470734f6f7e1 -> /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0127 13:58:36.282800 1183652 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 ...
	I0127 13:58:36.282953 1183652 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 ...
	I0127 13:58:37.162616 1183652 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:58:37.162997 1183652 profile.go:143] Saving config to /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/download-only-017417/config.json ...
	I0127 13:58:37.163032 1183652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/download-only-017417/config.json: {Name:mk2638e6ab98b668145d3bcc6df1771b119274d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:58:37.163224 1183652 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:58:37.163986 1183652 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20325-1178062/.minikube/cache/linux/arm64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-017417 host does not exist
	  To start a cluster, run: "minikube start -p download-only-017417"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.10s)
TestDownloadOnly/v1.32.1/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.23s)
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-017417
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)
TestBinaryMirror (0.62s)
=== RUN   TestBinaryMirror
I0127 13:58:39.184025 1183449 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-617848 --alsologtostderr --binary-mirror http://127.0.0.1:43587 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-617848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-617848
--- PASS: TestBinaryMirror (0.62s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-790770
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-790770: exit status 85 (75.084372ms)
-- stdout --
	* Profile "addons-790770" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-790770"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-790770
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-790770: exit status 85 (77.796751ms)
-- stdout --
	* Profile "addons-790770" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-790770"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
TestAddons/Setup (225.63s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-790770 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-790770 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m45.6294819s)
--- PASS: TestAddons/Setup (225.63s)
TestAddons/serial/GCPAuth/Namespaces (0.21s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-790770 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-790770 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)
TestAddons/serial/GCPAuth/FakeCredentials (10.92s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-790770 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-790770 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5fece5b4-bbe4-4b39-bb37-f4ee9f4dd697] Pending
helpers_test.go:344: "busybox" [5fece5b4-bbe4-4b39-bb37-f4ee9f4dd697] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5fece5b4-bbe4-4b39-bb37-f4ee9f4dd697] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004268137s
addons_test.go:633: (dbg) Run:  kubectl --context addons-790770 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-790770 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-790770 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-790770 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.92s)
TestAddons/parallel/Registry (36.27s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 8.310252ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-7mfnl" [8373b1bf-7553-43a4-bc9f-a9a0f3320699] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003567431s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s4zc7" [43da24c2-fd2a-4d7f-b8ec-be818eb4ec64] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004545091s
addons_test.go:331: (dbg) Run:  kubectl --context addons-790770 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-790770 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-790770 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (24.356424197s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 ip
2025/01/27 14:03:21 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (36.27s)
TestAddons/parallel/InspektorGadget (12.23s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gj4zm" [87ba8f08-26ff-4a73-946c-c45150856293] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004293869s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-790770 addons disable inspektor-gadget --alsologtostderr -v=1: (6.225462869s)
--- PASS: TestAddons/parallel/InspektorGadget (12.23s)
TestAddons/parallel/MetricsServer (5.90s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 8.081928ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-xwjq9" [86fa6622-b0d8-40d1-a078-fb3cd93b374c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003538643s
addons_test.go:402: (dbg) Run:  kubectl --context addons-790770 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.90s)
TestAddons/parallel/CSI (56.76s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0127 14:03:21.842128 1183449 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 14:03:21.850589 1183449 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 14:03:21.850623 1183449 kapi.go:107] duration metric: took 8.511796ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.523513ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-790770 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-790770 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [57a6cc52-2a74-4c7a-ba9d-6296ae2e5fd1] Pending
helpers_test.go:344: "task-pv-pod" [57a6cc52-2a74-4c7a-ba9d-6296ae2e5fd1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [57a6cc52-2a74-4c7a-ba9d-6296ae2e5fd1] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.006188471s
addons_test.go:511: (dbg) Run:  kubectl --context addons-790770 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-790770 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-790770 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-790770 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-790770 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-790770 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-790770 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [00131492-0cc8-4a56-8110-5d0c0435b766] Pending
helpers_test.go:344: "task-pv-pod-restore" [00131492-0cc8-4a56-8110-5d0c0435b766] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [00131492-0cc8-4a56-8110-5d0c0435b766] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00356546s
addons_test.go:553: (dbg) Run:  kubectl --context addons-790770 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-790770 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-790770 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-790770 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.892192598s)
--- PASS: TestAddons/parallel/CSI (56.76s)
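The repeated `kubectl ... get pvc ... -o jsonpath={.status.phase}` lines above are a poll-until-ready loop: the helper re-runs the same query until the PVC reports the expected phase or a deadline passes. A minimal shell sketch of that pattern (the real helper is Go code in helpers_test.go; `wait_for` and the stub command here are illustrative stand-ins, with the kubectl invocation and `Bound` as the real-world arguments):

```shell
# Sketch of the poll-until-ready pattern behind the repeated
# `kubectl get pvc ... -o jsonpath={.status.phase}` lines above.
# `wait_for` is a hypothetical helper, not minikube's actual Go implementation.
wait_for() {
  check_cmd=$1; want=$2; timeout=$3
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    got=$($check_cmd)
    if [ "$got" = "$want" ]; then
      echo "ready after ${elapsed}s"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out after ${timeout}s" >&2
  return 1
}

# Stub demo: the check succeeds on the first attempt.
wait_for "echo Bound" Bound 5
```

In the real test the check command would be `kubectl --context addons-790770 get pvc hpvc -o jsonpath={.status.phase}` and the timeout 6m0s.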

TestAddons/parallel/Headlamp (42.9s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-790770 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-790770 --alsologtostderr -v=1: (1.074230565s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-m6l6h" [2d1574f7-251d-418b-be91-40f600445b19] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-m6l6h" [2d1574f7-251d-418b-be91-40f600445b19] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 36.004854092s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-790770 addons disable headlamp --alsologtostderr -v=1: (5.81565265s)
--- PASS: TestAddons/parallel/Headlamp (42.90s)

TestAddons/parallel/CloudSpanner (5.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-gjk99" [1e628989-3be3-4245-93ae-daeeb008f1b1] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004266952s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/LocalPath (53.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-790770 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-790770 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-790770 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [88c62789-8350-4c4c-8809-1851ca9c6475] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [88c62789-8350-4c4c-8809-1851ca9c6475] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [88c62789-8350-4c4c-8809-1851ca9c6475] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004042225s
addons_test.go:906: (dbg) Run:  kubectl --context addons-790770 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 ssh "cat /opt/local-path-provisioner/pvc-7efe29f9-361e-427c-a708-ae898111e7ca_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-790770 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-790770 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-790770 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.282943992s)
--- PASS: TestAddons/parallel/LocalPath (53.34s)

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t85g9" [fd20b544-75f0-46ac-beee-0f2d1020bcb4] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003676869s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/parallel/Yakd (11.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-6xw4p" [b4b4da69-0d36-464c-b4f4-25c2507f2e9c] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004308233s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-790770 addons disable yakd --alsologtostderr -v=1: (5.74622333s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

TestAddons/StoppedEnableDisable (12.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-790770
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-790770: (11.891458283s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-790770
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-790770
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-790770
--- PASS: TestAddons/StoppedEnableDisable (12.19s)

TestCertOptions (33.52s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-637192 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-637192 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.689844523s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-637192 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-637192 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-637192 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-637192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-637192
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-637192: (2.142679472s)
--- PASS: TestCertOptions (33.52s)
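The `ssh "openssl x509 -text -noout ..."` step above verifies that the extra `--apiserver-ips` and `--apiserver-names` values were baked into the apiserver certificate as Subject Alternative Names. The same check can be sketched against a throwaway self-signed certificate carrying the same SANs (assumes OpenSSL 1.1.1+ for `-addext`; the test itself inspects the real /var/lib/minikube/certs/apiserver.crt inside the node):

```shell
# Issue a throwaway self-signed cert with the SANs the test passes via
# --apiserver-ips/--apiserver-names, then confirm they appear, as the
# `openssl x509 -text -noout` step does against apiserver.crt.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com" \
  2>/dev/null
san_out=$(openssl x509 -text -noout -in "$tmp/cert.pem" | grep -A1 "Subject Alternative Name")
echo "$san_out"
rm -rf "$tmp"
```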

TestCertExpiration (248.9s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-084645 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-084645 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.771308898s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-084645 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-084645 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.435708132s)
helpers_test.go:175: Cleaning up "cert-expiration-084645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-084645
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-084645: (2.688625395s)
--- PASS: TestCertExpiration (248.90s)
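The two starts above exercise `--cert-expiration` (first a 3m lifetime, then 8760h, i.e. one year); the property under test is simply the certificate's notAfter date. A hedged sketch of how that date can be inspected on any cert with stock OpenSSL (illustrative only; the test drives this through minikube, not by calling openssl directly):

```shell
# Issue a throwaway one-day cert, print its expiry, and use -checkend
# (exit 0 if the cert is still valid N seconds from now) to test it.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -subj "/CN=minikube" 2>/dev/null
end_out=$(openssl x509 -enddate -noout -in "$tmp/cert.pem")
echo "$end_out"
if openssl x509 -checkend 3600 -noout -in "$tmp/cert.pem" >/dev/null; then
  echo "still valid an hour from now"
fi
rm -rf "$tmp"
```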

TestForceSystemdFlag (38.72s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-907514 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-907514 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.965922497s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-907514 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-907514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-907514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-907514: (2.444765789s)
--- PASS: TestForceSystemdFlag (38.72s)

TestForceSystemdEnv (46.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-123763 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-123763 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.477938748s)
helpers_test.go:175: Cleaning up "force-systemd-env-123763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-123763
E0127 14:47:26.355725 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-123763: (2.574348751s)
--- PASS: TestForceSystemdEnv (46.05s)

TestErrorSpam/setup (30.27s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-248783 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-248783 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-248783 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-248783 --driver=docker  --container-runtime=crio: (30.267794804s)
--- PASS: TestErrorSpam/setup (30.27s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (1.77s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 pause
--- PASS: TestErrorSpam/pause (1.77s)

TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 stop: (1.245814547s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-248783 --log_dir /tmp/nospam-248783 stop
--- PASS: TestErrorSpam/stop (1.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20325-1178062/.minikube/files/etc/test/nested/copy/1183449/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.52s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138053 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0127 14:07:26.363096 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:26.376337 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:26.387671 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:26.409056 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:26.451520 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:26.533846 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:26.695155 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:27.016785 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:27.658793 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:28.940440 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:31.502040 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:36.623738 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:07:46.865353 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:08:07.346663 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-138053 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.517951132s)
--- PASS: TestFunctional/serial/StartWithProxy (76.52s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.35s)

=== RUN   TestFunctional/serial/SoftStart
I0127 14:08:36.603567 1183449 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138053 --alsologtostderr -v=8
E0127 14:08:48.308973 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-138053 --alsologtostderr -v=8: (27.342794079s)
functional_test.go:663: soft start took 27.345450476s for "functional-138053" cluster.
I0127 14:09:03.946702 1183449 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (27.35s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-138053 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 cache add registry.k8s.io/pause:3.1: (1.562679581s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 cache add registry.k8s.io/pause:3.3: (1.474281151s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 cache add registry.k8s.io/pause:latest: (1.391620917s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.43s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-138053 /tmp/TestFunctionalserialCacheCmdcacheadd_local3398128281/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cache add minikube-local-cache-test:functional-138053
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cache delete minikube-local-cache-test:functional-138053
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-138053
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.948715ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 cache reload: (1.248438297s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 kubectl -- --context functional-138053 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-138053 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (41.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138053 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-138053 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.569767306s)
functional_test.go:761: restart took 41.569894396s for "functional-138053" cluster.
I0127 14:09:54.583156 1183449 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (41.57s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-138053 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 logs: (1.767840548s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

TestFunctional/serial/LogsFileCmd (1.76s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 logs --file /tmp/TestFunctionalserialLogsFileCmd4010176607/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 logs --file /tmp/TestFunctionalserialLogsFileCmd4010176607/001/logs.txt: (1.759057587s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

TestFunctional/serial/InvalidService (4.31s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-138053 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-138053
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-138053: exit status 115 (481.692211ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32696 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-138053 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 config get cpus: exit status 14 (85.522564ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 config get cpus: exit status 14 (95.004678ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

TestFunctional/parallel/DashboardCmd (11.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-138053 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-138053 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1213472: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.07s)

TestFunctional/parallel/DryRun (0.6s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138053 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-138053 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (255.207722ms)
-- stdout --
	* [functional-138053] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20325
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0127 14:10:46.943411 1212956 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:10:46.943535 1212956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:10:46.943540 1212956 out.go:358] Setting ErrFile to fd 2...
	I0127 14:10:46.943545 1212956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:10:46.943824 1212956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 14:10:46.944197 1212956 out.go:352] Setting JSON to false
	I0127 14:10:46.945160 1212956 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13998,"bootTime":1737973049,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0127 14:10:46.945229 1212956 start.go:139] virtualization:  
	I0127 14:10:46.954600 1212956 out.go:177] * [functional-138053] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 14:10:46.958212 1212956 out.go:177]   - MINIKUBE_LOCATION=20325
	I0127 14:10:46.959603 1212956 notify.go:220] Checking for updates...
	I0127 14:10:46.964248 1212956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:10:46.970592 1212956 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	I0127 14:10:46.974064 1212956 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	I0127 14:10:46.978925 1212956 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 14:10:46.981845 1212956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:10:46.985188 1212956 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:10:46.985726 1212956 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:10:47.013115 1212956 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 14:10:47.013238 1212956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 14:10:47.095542 1212956 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 14:10:47.085749955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 14:10:47.095675 1212956 docker.go:318] overlay module found
	I0127 14:10:47.098808 1212956 out.go:177] * Using the docker driver based on existing profile
	I0127 14:10:47.102044 1212956 start.go:297] selected driver: docker
	I0127 14:10:47.102063 1212956 start.go:901] validating driver "docker" against &{Name:functional-138053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-138053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:10:47.102178 1212956 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:10:47.105595 1212956 out.go:201] 
	W0127 14:10:47.108469 1212956 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 14:10:47.111356 1212956 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138053 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.60s)

TestFunctional/parallel/InternationalLanguage (0.34s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138053 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-138053 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (337.996424ms)
-- stdout --
	* [functional-138053] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20325
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0127 14:10:47.134886 1213005 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:10:47.135115 1213005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:10:47.135127 1213005 out.go:358] Setting ErrFile to fd 2...
	I0127 14:10:47.135133 1213005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:10:47.137044 1213005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 14:10:47.137832 1213005 out.go:352] Setting JSON to false
	I0127 14:10:47.139609 1213005 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13999,"bootTime":1737973049,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0127 14:10:47.139689 1213005 start.go:139] virtualization:  
	I0127 14:10:47.144822 1213005 out.go:177] * [functional-138053] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0127 14:10:47.149221 1213005 out.go:177]   - MINIKUBE_LOCATION=20325
	I0127 14:10:47.149266 1213005 notify.go:220] Checking for updates...
	I0127 14:10:47.154693 1213005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:10:47.157653 1213005 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	I0127 14:10:47.160528 1213005 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	I0127 14:10:47.165618 1213005 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 14:10:47.168512 1213005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:10:47.171971 1213005 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:10:47.172920 1213005 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:10:47.222208 1213005 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 14:10:47.222399 1213005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 14:10:47.345555 1213005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 14:10:47.336582873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 14:10:47.345661 1213005 docker.go:318] overlay module found
	I0127 14:10:47.348794 1213005 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0127 14:10:47.351711 1213005 start.go:297] selected driver: docker
	I0127 14:10:47.351727 1213005 start.go:901] validating driver "docker" against &{Name:functional-138053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-138053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:10:47.351834 1213005 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:10:47.355276 1213005 out.go:201] 
	W0127 14:10:47.358313 1213005 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	(English: Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB — the French output is expected, since this test runs minikube with a French locale)
	I0127 14:10:47.363139 1213005 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.34s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

TestFunctional/parallel/ServiceCmdConnect (8.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-138053 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-138053 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-zmvgg" [4f49e422-2068-456f-9054-4b173d7317df] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-zmvgg" [4f49e422-2068-456f-9054-4b173d7317df] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004232116s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32327
functional_test.go:1675: http://192.168.49.2:32327: success! body:

Hostname: hello-node-connect-8449669db6-zmvgg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32327
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)
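For reference, the `kubectl expose` step above generates a NodePort Service roughly equivalent to the following manifest (a sketch: the selector mirrors the label `kubectl create deployment` applies, and the nodePort 32327 seen in the endpoint URL was auto-assigned in this run, not specified):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-node-connect
  namespace: default
spec:
  type: NodePort
  selector:
    app: hello-node-connect
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32327   # auto-assigned in the run above
```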

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (27.78s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7e0f76bc-7f31-445b-9ac6-dd6e1d396193] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012646218s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-138053 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-138053 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-138053 get pvc myclaim -o=json
I0127 14:10:24.720918 1183449 retry.go:31] will retry after 1.942095969s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:bf3f8d43-7924-443c-afa5-c590646b83ec ResourceVersion:747 Generation:0 CreationTimestamp:2025-01-27 14:10:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40014d7e10 VolumeMode:0x40014d7e30 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-138053 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-138053 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [84de0c99-edb5-46f6-892e-8489197f5821] Pending
helpers_test.go:344: "sp-pod" [84de0c99-edb5-46f6-892e-8489197f5821] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [84de0c99-edb5-46f6-892e-8489197f5821] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004180291s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-138053 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-138053 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-138053 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fef65bb2-856f-419f-8c81-8f93cfa95083] Pending
helpers_test.go:344: "sp-pod" [fef65bb2-856f-419f-8c81-8f93cfa95083] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fef65bb2-856f-419f-8c81-8f93cfa95083] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004601132s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-138053 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.78s)
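The claim the test applies (reconstructed from the last-applied-configuration annotation in the retry message above) is essentially the following; it sits in Pending until minikube's hostpath provisioner binds it, hence the retry:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
```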

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (2.05s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh -n functional-138053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cp functional-138053:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1049257327/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh -n functional-138053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh -n functional-138053 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1183449/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo cat /etc/test/nested/copy/1183449/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.13s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1183449.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo cat /etc/ssl/certs/1183449.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1183449.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo cat /usr/share/ca-certificates/1183449.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11834492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo cat /etc/ssl/certs/11834492.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11834492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo cat /usr/share/ca-certificates/11834492.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)
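The filenames `51391683.0` and `3ec20f2e.0` checked above follow the OpenSSL c_rehash convention: each is the certificate's subject hash plus a collision index. A minimal, self-contained sketch of how such a name is derived (the throwaway cert and `/tmp` paths are illustrative):

```shell
# Generate a throwaway self-signed cert, then compute the subject hash that
# would name its /etc/ssl/certs/<hash>.0 symlink.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)
echo "hashed name: ${hash}.0"
```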

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-138053 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 ssh "sudo systemctl is-active docker": exit status 1 (318.569758ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 ssh "sudo systemctl is-active containerd": exit status 1 (369.006977ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
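The `ssh: Process exited with status 3` lines above are expected: `systemctl is-active` exits 0 only when the unit is active, and an inactive unit prints "inactive" and exits 3, which the test treats as a pass for the non-configured runtimes. A self-contained stand-in (no systemd required) mirroring that contract; `is_active` here is a hypothetical substitute for the real `systemctl is-active <unit>`:

```shell
# Stand-in for `systemctl is-active "$1"` on a node where the unit is stopped:
# prints the state and returns the is-active exit code for "inactive" (3).
is_active() {
  echo "inactive"
  return 3
}
rc=0
state=$(is_active docker) || rc=$?
echo "docker: state=$state exit=$rc"
```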

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 version -o=json --components: (1.324250799s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138053 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-138053
localhost/kicbase/echo-server:functional-138053
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138053 image ls --format short --alsologtostderr:
I0127 14:10:49.642023 1213513 out.go:345] Setting OutFile to fd 1 ...
I0127 14:10:49.642227 1213513 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:49.642272 1213513 out.go:358] Setting ErrFile to fd 2...
I0127 14:10:49.642294 1213513 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:49.642664 1213513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
I0127 14:10:49.643732 1213513 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:49.643953 1213513 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:49.645772 1213513 cli_runner.go:164] Run: docker container inspect functional-138053 --format={{.State.Status}}
I0127 14:10:49.676418 1213513 ssh_runner.go:195] Run: systemctl --version
I0127 14:10:49.676472 1213513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138053
I0127 14:10:49.695297 1213513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33940 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/functional-138053/id_rsa Username:docker}
I0127 14:10:49.789538 1213513 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
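The stderr above shows where the listing comes from: `sudo crictl images --output json` on the node, whose repoTags are then printed in what appears to be reverse-sorted order. A rough, self-contained stand-in over sample JSON (grep/sed in place of the real parsing; the `/tmp` file and two sample tags are illustrative):

```shell
# Fake a tiny crictl-style JSON payload, then extract and reverse-sort repoTags
# the way the short listing format presents them.
cat <<'EOF' > /tmp/images.json
{"images":[{"repoTags":["docker.io/library/nginx:alpine"]},{"repoTags":["registry.k8s.io/pause:3.10"]}]}
EOF
grep -o '"repoTags":\["[^"]*"' /tmp/images.json | sed 's/.*\["//;s/"$//' | sort -r
```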

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138053 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | f9d642c42f7bc | 52.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| localhost/minikube-local-cache-test     | functional-138053  | 5300d504009c4 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 2933761aa7ada | 88.2MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | ddb38cac617cb | 69MB   |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| localhost/my-image                      | functional-138053  | 318d1119296be | 1.64MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-138053  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e124fbed851d7 | 98.3MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | latest             | 781d902f1e046 | 201MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 265c2dedf28ab | 95MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138053 image ls --format table --alsologtostderr:
I0127 14:10:56.462837 1213951 out.go:345] Setting OutFile to fd 1 ...
I0127 14:10:56.463023 1213951 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:56.463029 1213951 out.go:358] Setting ErrFile to fd 2...
I0127 14:10:56.463034 1213951 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:56.463290 1213951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
I0127 14:10:56.463988 1213951 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:56.464100 1213951 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:56.464557 1213951 cli_runner.go:164] Run: docker container inspect functional-138053 --format={{.State.Status}}
I0127 14:10:56.489370 1213951 ssh_runner.go:195] Run: systemctl --version
I0127 14:10:56.489429 1213951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138053
I0127 14:10:56.511320 1213951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33940 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/functional-138053/id_rsa Username:docker}
I0127 14:10:56.614633 1213951 ssh_runner.go:195] Run: sudo crictl images --output json
2025/01/27 14:10:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138053 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],
"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"94991840"},{"id":"2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3
","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"88241478"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":
["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-138053"],"size":"4788229"},{"id":"5300d504009c4ae2ff44df6e961caa04989dc2c39a07baafa8918ddffa376551","repoDigests":["localhost/minikube-local-cache-test@sha256:4b9539c9aa3092d09d7856685a13b1c20cc86d6f0d4c9bd8573e0d24d9d9705b"],"repoTags":["localhost/minikube-local-cache-test:functional-138053"],"size":"3330"},{"id":"318d1119296be7b11ac9d83a43031e768d187bf6bbd12a41c7224a567decf5be","repoDigests":["localhost/my-image@sha256:7e643dbfa3210663945328867ae5a0384b5fa215bbc31f5de44e917d872c8515"],"repoTags":["localhost/my-image:functional-138053"],"size":"1640226"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k
8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"98313623"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:5ad6d1fbf7a41cf81658450236559fd03a80f78e6a5ed21b08e373dec4948712"],"repoTags":["docker.io/library/nginx:latest"],"size":"201125287"},{"id":"f9d642c42f7bc79efd
0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10","docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52333544"},{"id":"ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"68973892"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"siz
e":"519877"},{"id":"e313a4c3c5088d83123d9105b7c2eefef7ca22f82a9b9f9a3a0fc4c3ed8c3154","repoDigests":["docker.io/library/903d991944864d3f3d3e537b891d4e69b77523a2218e597f580bb43c3cdf0034-tmp@sha256:e0b72080d3b6c0d48ad5a75a97f6ce0e79970cb90cfa0a7c0c63b3f9ad7fcd6f"],"repoTags":[],"size":"1637644"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"]
,"size":"87536549"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138053 image ls --format json --alsologtostderr:
I0127 14:10:56.177581 1213918 out.go:345] Setting OutFile to fd 1 ...
I0127 14:10:56.177818 1213918 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:56.177846 1213918 out.go:358] Setting ErrFile to fd 2...
I0127 14:10:56.177867 1213918 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:56.178162 1213918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
I0127 14:10:56.178903 1213918 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:56.179090 1213918 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:56.179623 1213918 cli_runner.go:164] Run: docker container inspect functional-138053 --format={{.State.Status}}
I0127 14:10:56.202808 1213918 ssh_runner.go:195] Run: systemctl --version
I0127 14:10:56.202863 1213918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138053
I0127 14:10:56.223618 1213918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33940 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/functional-138053/id_rsa Username:docker}
I0127 14:10:56.309390 1213918 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138053 image ls --format yaml --alsologtostderr:
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "94991840"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:5ad6d1fbf7a41cf81658450236559fd03a80f78e6a5ed21b08e373dec4948712
repoTags:
- docker.io/library/nginx:latest
size: "201125287"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-138053
size: "4788229"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "88241478"
- id: e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "98313623"
- id: ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "68973892"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "52333544"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 5300d504009c4ae2ff44df6e961caa04989dc2c39a07baafa8918ddffa376551
repoDigests:
- localhost/minikube-local-cache-test@sha256:4b9539c9aa3092d09d7856685a13b1c20cc86d6f0d4c9bd8573e0d24d9d9705b
repoTags:
- localhost/minikube-local-cache-test:functional-138053
size: "3330"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138053 image ls --format yaml --alsologtostderr:
I0127 14:10:49.893536 1213546 out.go:345] Setting OutFile to fd 1 ...
I0127 14:10:49.893675 1213546 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:49.893686 1213546 out.go:358] Setting ErrFile to fd 2...
I0127 14:10:49.893691 1213546 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:49.894078 1213546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
I0127 14:10:49.895168 1213546 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:49.895332 1213546 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:49.896579 1213546 cli_runner.go:164] Run: docker container inspect functional-138053 --format={{.State.Status}}
I0127 14:10:49.916425 1213546 ssh_runner.go:195] Run: systemctl --version
I0127 14:10:49.916475 1213546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138053
I0127 14:10:49.933987 1213546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33940 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/functional-138053/id_rsa Username:docker}
I0127 14:10:50.022753 1213546 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 ssh pgrep buildkitd: exit status 1 (262.006256ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image build -t localhost/my-image:functional-138053 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 image build -t localhost/my-image:functional-138053 testdata/build --alsologtostderr: (5.459584917s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138053 image build -t localhost/my-image:functional-138053 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e313a4c3c50
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-138053
--> 318d1119296
Successfully tagged localhost/my-image:functional-138053
318d1119296be7b11ac9d83a43031e768d187bf6bbd12a41c7224a567decf5be
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138053 image build -t localhost/my-image:functional-138053 testdata/build --alsologtostderr:
I0127 14:10:50.405686 1213633 out.go:345] Setting OutFile to fd 1 ...
I0127 14:10:50.406400 1213633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:50.406420 1213633 out.go:358] Setting ErrFile to fd 2...
I0127 14:10:50.406427 1213633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:10:50.406719 1213633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
I0127 14:10:50.407410 1213633 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:50.408076 1213633 config.go:182] Loaded profile config "functional-138053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:10:50.408652 1213633 cli_runner.go:164] Run: docker container inspect functional-138053 --format={{.State.Status}}
I0127 14:10:50.426572 1213633 ssh_runner.go:195] Run: systemctl --version
I0127 14:10:50.426635 1213633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138053
I0127 14:10:50.456661 1213633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33940 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/functional-138053/id_rsa Username:docker}
I0127 14:10:50.545554 1213633 build_images.go:161] Building image from path: /tmp/build.3718004011.tar
I0127 14:10:50.545622 1213633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 14:10:50.555077 1213633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3718004011.tar
I0127 14:10:50.558756 1213633 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3718004011.tar: stat -c "%s %y" /var/lib/minikube/build/build.3718004011.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3718004011.tar': No such file or directory
I0127 14:10:50.558786 1213633 ssh_runner.go:362] scp /tmp/build.3718004011.tar --> /var/lib/minikube/build/build.3718004011.tar (3072 bytes)
I0127 14:10:50.584285 1213633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3718004011
I0127 14:10:50.593809 1213633 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3718004011 -xf /var/lib/minikube/build/build.3718004011.tar
I0127 14:10:50.603384 1213633 crio.go:315] Building image: /var/lib/minikube/build/build.3718004011
I0127 14:10:50.603467 1213633 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-138053 /var/lib/minikube/build/build.3718004011 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0127 14:10:55.768174 1213633 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-138053 /var/lib/minikube/build/build.3718004011 --cgroup-manager=cgroupfs: (5.164684981s)
I0127 14:10:55.768246 1213633 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3718004011
I0127 14:10:55.777743 1213633 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3718004011.tar
I0127 14:10:55.787985 1213633 build_images.go:217] Built localhost/my-image:functional-138053 from /tmp/build.3718004011.tar
I0127 14:10:55.788016 1213633 build_images.go:133] succeeded building to: functional-138053
I0127 14:10:55.788022 1213633 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.03s)

TestFunctional/parallel/ImageCommands/Setup (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-138053
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image load --daemon kicbase/echo-server:functional-138053 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 image load --daemon kicbase/echo-server:functional-138053 --alsologtostderr: (1.345432462s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image load --daemon kicbase/echo-server:functional-138053 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-138053 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-138053 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-8n6zh" [47d7f4c1-6ca4-4280-bd2c-00d6463ece80] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-8n6zh" [47d7f4c1-6ca4-4280-bd2c-00d6463ece80] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.006171635s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-138053
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image load --daemon kicbase/echo-server:functional-138053 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image save kicbase/echo-server:functional-138053 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
E0127 14:10:10.230518 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-138053 image save kicbase/echo-server:functional-138053 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr: (2.24619886s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.25s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image rm kicbase/echo-server:functional-138053 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-138053
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 image save --daemon kicbase/echo-server:functional-138053 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-138053
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-138053 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-138053 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-138053 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1209969: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-138053 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-138053 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-138053 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [29df1ece-58e6-44c3-902d-ef4abcc211a5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [29df1ece-58e6-44c3-902d-ef4abcc211a5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00335868s
I0127 14:10:23.763483 1183449 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

TestFunctional/parallel/ServiceCmd/List (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 service list -o json
functional_test.go:1494: Took "359.30544ms" to run "out/minikube-linux-arm64 -p functional-138053 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30659
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30659
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-138053 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.34.45 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-138053 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "346.072969ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "63.219622ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "372.285885ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "61.964528ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (7.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdany-port125085557/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737987035031880234" to /tmp/TestFunctionalparallelMountCmdany-port125085557/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737987035031880234" to /tmp/TestFunctionalparallelMountCmdany-port125085557/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737987035031880234" to /tmp/TestFunctionalparallelMountCmdany-port125085557/001/test-1737987035031880234
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (325.672746ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0127 14:10:35.357845 1183449 retry.go:31] will retry after 462.269024ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 14:10 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 14:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 14:10 test-1737987035031880234
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh cat /mount-9p/test-1737987035031880234
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-138053 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b7a4cacf-9f16-4e0e-9e85-74b0d9d16a2a] Pending
helpers_test.go:344: "busybox-mount" [b7a4cacf-9f16-4e0e-9e85-74b0d9d16a2a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b7a4cacf-9f16-4e0e-9e85-74b0d9d16a2a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b7a4cacf-9f16-4e0e-9e85-74b0d9d16a2a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003593414s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-138053 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdany-port125085557/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.82s)

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdspecific-port2297434901/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.702089ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0127 14:10:43.199422 1183449 retry.go:31] will retry after 577.522468ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdspecific-port2297434901/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 ssh "sudo umount -f /mount-9p": exit status 1 (259.933205ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-138053 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdspecific-port2297434901/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3167852061/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3167852061/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3167852061/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T" /mount1: exit status 1 (603.768948ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0127 14:10:45.399666 1183449 retry.go:31] will retry after 569.677857ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-138053 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-138053 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3167852061/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3167852061/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3167852061/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.08s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-138053
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-138053
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-138053
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (175.27s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-789594 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0127 14:12:26.355390 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:12:54.073662 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-789594 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m54.437426341s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (175.27s)

TestMultiControlPlane/serial/DeployApp (9.56s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-789594 -- rollout status deployment/busybox: (6.428618425s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-k247k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-kxg48 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-sk847 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-k247k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-kxg48 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-sk847 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-k247k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-kxg48 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-sk847 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.56s)

TestMultiControlPlane/serial/PingHostFromPods (1.71s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-k247k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-k247k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-kxg48 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-kxg48 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-sk847 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-789594 -- exec busybox-58667487b6-sk847 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.71s)

TestMultiControlPlane/serial/AddWorkerNode (33.14s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-789594 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-789594 -v=7 --alsologtostderr: (32.164732616s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.14s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-789594 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.001960449s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.00s)

TestMultiControlPlane/serial/CopyFile (19.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp testdata/cp-test.txt ha-789594:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3575062353/001/cp-test_ha-789594.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594:/home/docker/cp-test.txt ha-789594-m02:/home/docker/cp-test_ha-789594_ha-789594-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m02 "sudo cat /home/docker/cp-test_ha-789594_ha-789594-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594:/home/docker/cp-test.txt ha-789594-m03:/home/docker/cp-test_ha-789594_ha-789594-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m03 "sudo cat /home/docker/cp-test_ha-789594_ha-789594-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594:/home/docker/cp-test.txt ha-789594-m04:/home/docker/cp-test_ha-789594_ha-789594-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m04 "sudo cat /home/docker/cp-test_ha-789594_ha-789594-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp testdata/cp-test.txt ha-789594-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3575062353/001/cp-test_ha-789594-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m02:/home/docker/cp-test.txt ha-789594:/home/docker/cp-test_ha-789594-m02_ha-789594.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594 "sudo cat /home/docker/cp-test_ha-789594-m02_ha-789594.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m02:/home/docker/cp-test.txt ha-789594-m03:/home/docker/cp-test_ha-789594-m02_ha-789594-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m03 "sudo cat /home/docker/cp-test_ha-789594-m02_ha-789594-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m02:/home/docker/cp-test.txt ha-789594-m04:/home/docker/cp-test_ha-789594-m02_ha-789594-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m04 "sudo cat /home/docker/cp-test_ha-789594-m02_ha-789594-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp testdata/cp-test.txt ha-789594-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3575062353/001/cp-test_ha-789594-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m03:/home/docker/cp-test.txt ha-789594:/home/docker/cp-test_ha-789594-m03_ha-789594.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594 "sudo cat /home/docker/cp-test_ha-789594-m03_ha-789594.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m03:/home/docker/cp-test.txt ha-789594-m02:/home/docker/cp-test_ha-789594-m03_ha-789594-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m02 "sudo cat /home/docker/cp-test_ha-789594-m03_ha-789594-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m03:/home/docker/cp-test.txt ha-789594-m04:/home/docker/cp-test_ha-789594-m03_ha-789594-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m04 "sudo cat /home/docker/cp-test_ha-789594-m03_ha-789594-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp testdata/cp-test.txt ha-789594-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3575062353/001/cp-test_ha-789594-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m04:/home/docker/cp-test.txt ha-789594:/home/docker/cp-test_ha-789594-m04_ha-789594.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594 "sudo cat /home/docker/cp-test_ha-789594-m04_ha-789594.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m04:/home/docker/cp-test.txt ha-789594-m02:/home/docker/cp-test_ha-789594-m04_ha-789594-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m02 "sudo cat /home/docker/cp-test_ha-789594-m04_ha-789594-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 cp ha-789594-m04:/home/docker/cp-test.txt ha-789594-m03:/home/docker/cp-test_ha-789594-m04_ha-789594-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 ssh -n ha-789594-m03 "sudo cat /home/docker/cp-test_ha-789594-m04_ha-789594-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.36s)
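Every step in the CopyFile log above is the same copy-then-verify round-trip: `minikube cp` places a file on a node, then `minikube ssh -n <node> "sudo cat …"` reads it back to confirm the contents survived. A minimal local sketch of that pattern (plain `cp`, `cat`, and `diff` stand in for the minikube commands, since no running cluster is assumed here):

```shell
# Simulate the copy-then-verify round-trip that CopyFile performs per node pair.
# In the real test: cp -> `minikube -p <profile> cp`, cat/diff -> `minikube ssh -n <node> "sudo cat ..."`.
set -eu
src=$(mktemp)
dst=$(mktemp)
echo "cp-test payload" > "$src"
cp "$src" "$dst"                  # copy the file onto the "node"
diff -q "$src" "$dst" >/dev/null  # read it back and compare byte-for-byte
echo "copy verified"
```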

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 node stop m02 -v=7 --alsologtostderr
E0127 14:15:07.387509 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:07.393941 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:07.405399 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:07.426927 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:07.468311 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:07.549818 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:07.711314 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:08.032981 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:08.674941 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:09.956344 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:12.518857 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-789594 node stop m02 -v=7 --alsologtostderr: (11.994510997s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr: exit status 7 (710.527325ms)

                                                
                                                
-- stdout --
	ha-789594
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-789594-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789594-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-789594-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:15:13.779712 1229620 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:15:13.779837 1229620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:15:13.779848 1229620 out.go:358] Setting ErrFile to fd 2...
	I0127 14:15:13.779854 1229620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:15:13.780166 1229620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 14:15:13.780354 1229620 out.go:352] Setting JSON to false
	I0127 14:15:13.780416 1229620 mustload.go:65] Loading cluster: ha-789594
	I0127 14:15:13.780495 1229620 notify.go:220] Checking for updates...
	I0127 14:15:13.781134 1229620 config.go:182] Loaded profile config "ha-789594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:15:13.781156 1229620 status.go:174] checking status of ha-789594 ...
	I0127 14:15:13.782301 1229620 cli_runner.go:164] Run: docker container inspect ha-789594 --format={{.State.Status}}
	I0127 14:15:13.805258 1229620 status.go:371] ha-789594 host status = "Running" (err=<nil>)
	I0127 14:15:13.805283 1229620 host.go:66] Checking if "ha-789594" exists ...
	I0127 14:15:13.805590 1229620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-789594
	I0127 14:15:13.832976 1229620 host.go:66] Checking if "ha-789594" exists ...
	I0127 14:15:13.833279 1229620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:15:13.833333 1229620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-789594
	I0127 14:15:13.852174 1229620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33945 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/ha-789594/id_rsa Username:docker}
	I0127 14:15:13.942284 1229620 ssh_runner.go:195] Run: systemctl --version
	I0127 14:15:13.946829 1229620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:15:13.958208 1229620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 14:15:14.013901 1229620 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-27 14:15:14.001900975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 14:15:14.014518 1229620 kubeconfig.go:125] found "ha-789594" server: "https://192.168.49.254:8443"
	I0127 14:15:14.014558 1229620 api_server.go:166] Checking apiserver status ...
	I0127 14:15:14.014608 1229620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:14.026957 1229620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup
	I0127 14:15:14.037036 1229620 api_server.go:182] apiserver freezer: "12:freezer:/docker/3f0d086113715ab81ab3bc5178268d57aa273b5fab223310a28bb9a64b8c5cca/crio/crio-18e2b08f10d75f29e1e68bd0b56c3bb5e8d8063aa4e7acdd2095ce34725810be"
	I0127 14:15:14.037106 1229620 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3f0d086113715ab81ab3bc5178268d57aa273b5fab223310a28bb9a64b8c5cca/crio/crio-18e2b08f10d75f29e1e68bd0b56c3bb5e8d8063aa4e7acdd2095ce34725810be/freezer.state
	I0127 14:15:14.046758 1229620 api_server.go:204] freezer state: "THAWED"
	I0127 14:15:14.046785 1229620 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 14:15:14.055578 1229620 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 14:15:14.055640 1229620 status.go:463] ha-789594 apiserver status = Running (err=<nil>)
	I0127 14:15:14.055654 1229620 status.go:176] ha-789594 status: &{Name:ha-789594 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:15:14.055672 1229620 status.go:174] checking status of ha-789594-m02 ...
	I0127 14:15:14.056014 1229620 cli_runner.go:164] Run: docker container inspect ha-789594-m02 --format={{.State.Status}}
	I0127 14:15:14.073110 1229620 status.go:371] ha-789594-m02 host status = "Stopped" (err=<nil>)
	I0127 14:15:14.073136 1229620 status.go:384] host is not running, skipping remaining checks
	I0127 14:15:14.073143 1229620 status.go:176] ha-789594-m02 status: &{Name:ha-789594-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:15:14.073173 1229620 status.go:174] checking status of ha-789594-m03 ...
	I0127 14:15:14.073500 1229620 cli_runner.go:164] Run: docker container inspect ha-789594-m03 --format={{.State.Status}}
	I0127 14:15:14.092368 1229620 status.go:371] ha-789594-m03 host status = "Running" (err=<nil>)
	I0127 14:15:14.092395 1229620 host.go:66] Checking if "ha-789594-m03" exists ...
	I0127 14:15:14.092709 1229620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-789594-m03
	I0127 14:15:14.109873 1229620 host.go:66] Checking if "ha-789594-m03" exists ...
	I0127 14:15:14.110212 1229620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:15:14.110256 1229620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-789594-m03
	I0127 14:15:14.130604 1229620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33955 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/ha-789594-m03/id_rsa Username:docker}
	I0127 14:15:14.222549 1229620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:15:14.235638 1229620 kubeconfig.go:125] found "ha-789594" server: "https://192.168.49.254:8443"
	I0127 14:15:14.235668 1229620 api_server.go:166] Checking apiserver status ...
	I0127 14:15:14.235741 1229620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:14.247103 1229620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1348/cgroup
	I0127 14:15:14.257115 1229620 api_server.go:182] apiserver freezer: "12:freezer:/docker/55a46962f3bda73c6bde50ae177368b25855990addb466c3706026b0ffc64811/crio/crio-0e53ea49c1cb050e9623e3227d35b823a965997fe2aeff4cae0faaff59a1a634"
	I0127 14:15:14.257189 1229620 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/55a46962f3bda73c6bde50ae177368b25855990addb466c3706026b0ffc64811/crio/crio-0e53ea49c1cb050e9623e3227d35b823a965997fe2aeff4cae0faaff59a1a634/freezer.state
	I0127 14:15:14.266033 1229620 api_server.go:204] freezer state: "THAWED"
	I0127 14:15:14.266063 1229620 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 14:15:14.275293 1229620 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 14:15:14.275336 1229620 status.go:463] ha-789594-m03 apiserver status = Running (err=<nil>)
	I0127 14:15:14.275345 1229620 status.go:176] ha-789594-m03 status: &{Name:ha-789594-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:15:14.275391 1229620 status.go:174] checking status of ha-789594-m04 ...
	I0127 14:15:14.275738 1229620 cli_runner.go:164] Run: docker container inspect ha-789594-m04 --format={{.State.Status}}
	I0127 14:15:14.293649 1229620 status.go:371] ha-789594-m04 host status = "Running" (err=<nil>)
	I0127 14:15:14.293676 1229620 host.go:66] Checking if "ha-789594-m04" exists ...
	I0127 14:15:14.293996 1229620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-789594-m04
	I0127 14:15:14.310642 1229620 host.go:66] Checking if "ha-789594-m04" exists ...
	I0127 14:15:14.310951 1229620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:15:14.311011 1229620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-789594-m04
	I0127 14:15:14.329620 1229620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33960 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/ha-789594-m04/id_rsa Username:docker}
	I0127 14:15:14.418770 1229620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:15:14.430748 1229620 status.go:176] ha-789594-m04 status: &{Name:ha-789594-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
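The non-zero exit above is expected: with m02 stopped, `minikube status` reports a degraded profile. A quick way to tally host states from a captured status dump like the stdout shown above (condensed sample text, no cluster needed):

```shell
# Count running vs stopped hosts in a saved `minikube status` dump
# (condensed from the stdout above: one control-plane node stopped).
status='ha-789594
host: Running
ha-789594-m02
host: Stopped
ha-789594-m03
host: Running
ha-789594-m04
host: Running'
stopped=$(grep -c '^host: Stopped$' <<<"$status")
running=$(grep -c '^host: Running$' <<<"$status")
echo "running=$running stopped=$stopped"
```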

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (24.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 node start m02 -v=7 --alsologtostderr
E0127 14:15:17.641161 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:27.883141 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-789594 node start m02 -v=7 --alsologtostderr: (23.478959345s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr: (1.316723492s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.338382962s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.34s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (171.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-789594 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-789594 -v=7 --alsologtostderr
E0127 14:15:48.365086 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-789594 -v=7 --alsologtostderr: (37.103933405s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-789594 --wait=true -v=7 --alsologtostderr
E0127 14:16:29.327084 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:26.355439 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:17:51.249142 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-789594 --wait=true -v=7 --alsologtostderr: (2m14.010862677s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-789594
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (171.30s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-789594 node delete m03 -v=7 --alsologtostderr: (11.631007304s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.61s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-789594 stop -v=7 --alsologtostderr: (35.660764991s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr: exit status 7 (117.551541ms)

                                                
                                                
-- stdout --
	ha-789594
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789594-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789594-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:19:21.869534 1243592 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:19:21.869675 1243592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:19:21.869686 1243592 out.go:358] Setting ErrFile to fd 2...
	I0127 14:19:21.869692 1243592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:19:21.869918 1243592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 14:19:21.870097 1243592 out.go:352] Setting JSON to false
	I0127 14:19:21.870134 1243592 mustload.go:65] Loading cluster: ha-789594
	I0127 14:19:21.870191 1243592 notify.go:220] Checking for updates...
	I0127 14:19:21.870554 1243592 config.go:182] Loaded profile config "ha-789594": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:19:21.870568 1243592 status.go:174] checking status of ha-789594 ...
	I0127 14:19:21.871133 1243592 cli_runner.go:164] Run: docker container inspect ha-789594 --format={{.State.Status}}
	I0127 14:19:21.892477 1243592 status.go:371] ha-789594 host status = "Stopped" (err=<nil>)
	I0127 14:19:21.892505 1243592 status.go:384] host is not running, skipping remaining checks
	I0127 14:19:21.892512 1243592 status.go:176] ha-789594 status: &{Name:ha-789594 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:19:21.892547 1243592 status.go:174] checking status of ha-789594-m02 ...
	I0127 14:19:21.892904 1243592 cli_runner.go:164] Run: docker container inspect ha-789594-m02 --format={{.State.Status}}
	I0127 14:19:21.917315 1243592 status.go:371] ha-789594-m02 host status = "Stopped" (err=<nil>)
	I0127 14:19:21.917338 1243592 status.go:384] host is not running, skipping remaining checks
	I0127 14:19:21.917345 1243592 status.go:176] ha-789594-m02 status: &{Name:ha-789594-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:19:21.917364 1243592 status.go:174] checking status of ha-789594-m04 ...
	I0127 14:19:21.917658 1243592 cli_runner.go:164] Run: docker container inspect ha-789594-m04 --format={{.State.Status}}
	I0127 14:19:21.937947 1243592 status.go:371] ha-789594-m04 host status = "Stopped" (err=<nil>)
	I0127 14:19:21.937970 1243592 status.go:384] host is not running, skipping remaining checks
	I0127 14:19:21.937977 1243592 status.go:176] ha-789594-m04 status: &{Name:ha-789594-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.78s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (94.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-789594 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0127 14:20:07.387389 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:35.091079 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-789594 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m33.699695456s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (94.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-789594 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-789594 --control-plane -v=7 --alsologtostderr: (1m11.337747216s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-789594 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.001362209s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

                                                
                                    
TestJSONOutput/start/Command (81.52s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-599815 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0127 14:22:26.355018 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-599815 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m21.511277039s)
--- PASS: TestJSONOutput/start/Command (81.52s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-599815 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-599815 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-599815 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-599815 --output=json --user=testUser: (5.857234462s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-697697 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-697697 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.8601ms)
-- stdout --
	{"specversion":"1.0","id":"64beb1bd-a5f7-4480-9544-24ada2c6a4a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-697697] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd66225c-720f-425e-867e-397dab857b88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20325"}}
	{"specversion":"1.0","id":"9d6f26ea-a84d-4812-9503-4c46dbbf03ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"086cc2f0-bacb-4328-8bd5-eba33f2738f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig"}}
	{"specversion":"1.0","id":"f20ed179-2818-4526-abc1-2186127e36e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube"}}
	{"specversion":"1.0","id":"5d35e89f-5430-4783-9199-27509ae63694","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"692a2683-fdfb-4a15-b5c1-bda70fe13cf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9f357f50-660d-4ccb-8a42-98d4ab369a82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-697697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-697697
--- PASS: TestErrorJSONOutput (0.25s)
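Aside: the `--output=json` lines captured above are CloudEvents envelopes whose `data` field carries minikube's step/error payload. A minimal sketch of decoding them (plain Python; the sample lines are abbreviated copies of events from this log, with `id` elided):

```python
import json

# Abbreviated copies of two events from the TestErrorJSONOutput stdout above.
events = [
    '{"specversion":"1.0","id":"...","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json",'
    '"data":{"currentstep":"0","message":"[json-output-error-697697] minikube v1.35.0 on Ubuntu 20.04 (arm64)",'
    '"name":"Initial Minikube Setup","totalsteps":"19"}}',
    '{"specversion":"1.0","id":"...","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"advice":"","exitcode":"56","issues":"",'
    '"message":"The driver \'fail\' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}',
]

for line in events:
    ev = json.loads(line)
    kind = ev["type"].rsplit(".", 1)[-1]  # last segment: "step", "info", or "error"
    data = ev["data"]
    if kind == "error":
        print(f'error {data["name"]} (exit {data["exitcode"]}): {data["message"]}')
    else:
        print(f'step {data.get("currentstep", "?")}/{data.get("totalsteps", "?")}: {data["message"]}')
```

This mirrors how the test harness asserts on the error event (`exitcode` 56, `DRV_UNSUPPORTED_OS`) rather than on free-form text.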

TestKicCustomNetwork/create_custom_network (39.14s)
TestKicCustomNetwork/create_custom_network (39.14s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-782255 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-782255 --network=: (37.039641653s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-782255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-782255
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-782255: (2.079567858s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.14s)

TestKicCustomNetwork/use_default_bridge_network (35.86s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-181929 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-181929 --network=bridge: (33.838573154s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-181929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-181929
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-181929: (1.995964492s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.86s)

TestKicExistingNetwork (35.73s)
=== RUN   TestKicExistingNetwork
I0127 14:25:07.255812 1183449 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 14:25:07.272043 1183449 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 14:25:07.272135 1183449 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0127 14:25:07.272155 1183449 cli_runner.go:164] Run: docker network inspect existing-network
W0127 14:25:07.288685 1183449 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0127 14:25:07.288717 1183449 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0127 14:25:07.288735 1183449 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0127 14:25:07.288974 1183449 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 14:25:07.307241 1183449 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-16672da10350 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:16:22:ea:aa} reservation:<nil>}
I0127 14:25:07.307693 1183449 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191d4e0}
I0127 14:25:07.307789 1183449 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0127 14:25:07.307854 1183449 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0127 14:25:07.377745 1183449 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
E0127 14:25:07.387489 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-771789 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-771789 --network=existing-network: (33.581198623s)
helpers_test.go:175: Cleaning up "existing-network-771789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-771789
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-771789: (1.995204824s)
I0127 14:25:42.971031 1183449 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.73s)
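The `network_create` lines above show the free-subnet search: 192.168.49.0/24 is skipped because the default bridge already owns it, and 192.168.58.0/24 is chosen. A rough sketch of that overlap check using Python's `ipaddress` module (the step-by-9 walk of the third octet, 49 → 58 → 67, matches the addresses seen in this log but is an assumption about the search order, not minikube's exact algorithm):

```python
import ipaddress

def first_free_subnet(taken, start="192.168.49.0/24", step=9, tries=20):
    """Return the first candidate /24 that overlaps none of the taken networks."""
    net = ipaddress.ip_network(start)
    taken_nets = [ipaddress.ip_network(t) for t in taken]
    for _ in range(tries):
        if not any(net.overlaps(t) for t in taken_nets):
            return str(net)
        # Advance the third octet by `step` (49 -> 58 -> 67 ...).
        base = int(net.network_address) + step * 256
        net = ipaddress.ip_network(f"{ipaddress.ip_address(base)}/24")
    return None

print(first_free_subnet(["192.168.49.0/24"]))  # 192.168.58.0/24, as in the log
```

With 192.168.49.0/24 taken, the sketch lands on 192.168.58.0/24, matching the `using free private subnet` line above.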

TestKicCustomSubnet (34.95s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-494566 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-494566 --subnet=192.168.60.0/24: (32.84042007s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-494566 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-494566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-494566
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-494566: (2.075562785s)
--- PASS: TestKicCustomSubnet (34.95s)

TestKicStaticIP (34.6s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-231921 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-231921 --static-ip=192.168.200.200: (32.309064855s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-231921 ip
helpers_test.go:175: Cleaning up "static-ip-231921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-231921
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-231921: (2.128440586s)
--- PASS: TestKicStaticIP (34.60s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (66.65s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-378949 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-378949 --driver=docker  --container-runtime=crio: (30.048464492s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-381525 --driver=docker  --container-runtime=crio
E0127 14:27:26.354832 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-381525 --driver=docker  --container-runtime=crio: (30.899781805s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-378949
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-381525
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-381525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-381525
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-381525: (1.992263137s)
helpers_test.go:175: Cleaning up "first-378949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-378949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-378949: (2.290349502s)
--- PASS: TestMinikubeProfile (66.65s)

TestMountStart/serial/StartWithMountFirst (6.89s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-199010 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-199010 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.890620556s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.89s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-199010 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.63s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-200840 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-200840 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.631477639s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.63s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-200840 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-199010 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-199010 --alsologtostderr -v=5: (1.63480823s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-200840 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-200840
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-200840: (1.206162016s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.62s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-200840
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-200840: (6.618879116s)
--- PASS: TestMountStart/serial/RestartStopped (7.62s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-200840 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (103.37s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-974434 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0127 14:30:07.387139 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-974434 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m42.87791037s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.37s)

TestMultiNode/serial/DeployApp2Nodes (6.65s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-974434 -- rollout status deployment/busybox: (4.793712729s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-57j6t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-sk89v -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-57j6t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-sk89v -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-57j6t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-sk89v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.65s)

TestMultiNode/serial/PingHostFrom2Pods (1.02s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-57j6t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-57j6t -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-sk89v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-974434 -- exec busybox-58667487b6-sk89v -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
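The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from busybox nslookup output: line 5 is the answer's `Address 1:` line, and its third space-delimited field is the address. A standalone reproduction (`fake_nslookup` is a stand-in imitating typical busybox output inside a pod, not the test's actual command):

```shell
# Imitate busybox nslookup output: server lines, a blank line, then the answer.
fake_nslookup() {
  printf 'Server:    10.96.0.10\n'
  printf 'Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n'
  printf '\n'
  printf 'Name:      host.minikube.internal\n'
  printf 'Address 1: 192.168.67.1 host.minikube.internal\n'
}

# Same extraction as the test: line 5, third space-delimited field.
host_ip=$(fake_nslookup | awk 'NR==5' | cut -d" " -f3)
echo "$host_ip"   # 192.168.67.1
```

The extracted IP is what the test then pings (`ping -c 1 192.168.67.1`) from each busybox pod.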

TestMultiNode/serial/AddNode (27.71s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-974434 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-974434 -v 3 --alsologtostderr: (27.045051997s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.71s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-974434 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.74s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

TestMultiNode/serial/CopyFile (10.09s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp testdata/cp-test.txt multinode-974434:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3664779277/001/cp-test_multinode-974434.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434:/home/docker/cp-test.txt multinode-974434-m02:/home/docker/cp-test_multinode-974434_multinode-974434-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m02 "sudo cat /home/docker/cp-test_multinode-974434_multinode-974434-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434:/home/docker/cp-test.txt multinode-974434-m03:/home/docker/cp-test_multinode-974434_multinode-974434-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m03 "sudo cat /home/docker/cp-test_multinode-974434_multinode-974434-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp testdata/cp-test.txt multinode-974434-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3664779277/001/cp-test_multinode-974434-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434-m02:/home/docker/cp-test.txt multinode-974434:/home/docker/cp-test_multinode-974434-m02_multinode-974434.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434 "sudo cat /home/docker/cp-test_multinode-974434-m02_multinode-974434.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434-m02:/home/docker/cp-test.txt multinode-974434-m03:/home/docker/cp-test_multinode-974434-m02_multinode-974434-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m03 "sudo cat /home/docker/cp-test_multinode-974434-m02_multinode-974434-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp testdata/cp-test.txt multinode-974434-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3664779277/001/cp-test_multinode-974434-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434-m03:/home/docker/cp-test.txt multinode-974434:/home/docker/cp-test_multinode-974434-m03_multinode-974434.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434 "sudo cat /home/docker/cp-test_multinode-974434-m03_multinode-974434.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 cp multinode-974434-m03:/home/docker/cp-test.txt multinode-974434-m02:/home/docker/cp-test_multinode-974434-m03_multinode-974434-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 ssh -n multinode-974434-m02 "sudo cat /home/docker/cp-test_multinode-974434-m03_multinode-974434-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.09s)

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-974434 node stop m03: (1.21257451s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-974434 status: exit status 7 (520.654465ms)

-- stdout --
	multinode-974434
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-974434-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-974434-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-974434 status --alsologtostderr: exit status 7 (497.153492ms)

-- stdout --
	multinode-974434
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-974434-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-974434-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 14:30:57.555337 1297348 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:30:57.555542 1297348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:30:57.555569 1297348 out.go:358] Setting ErrFile to fd 2...
	I0127 14:30:57.555598 1297348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:30:57.555937 1297348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 14:30:57.556176 1297348 out.go:352] Setting JSON to false
	I0127 14:30:57.556265 1297348 mustload.go:65] Loading cluster: multinode-974434
	I0127 14:30:57.556370 1297348 notify.go:220] Checking for updates...
	I0127 14:30:57.556939 1297348 config.go:182] Loaded profile config "multinode-974434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:30:57.557000 1297348 status.go:174] checking status of multinode-974434 ...
	I0127 14:30:57.557656 1297348 cli_runner.go:164] Run: docker container inspect multinode-974434 --format={{.State.Status}}
	I0127 14:30:57.577367 1297348 status.go:371] multinode-974434 host status = "Running" (err=<nil>)
	I0127 14:30:57.577413 1297348 host.go:66] Checking if "multinode-974434" exists ...
	I0127 14:30:57.577773 1297348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-974434
	I0127 14:30:57.608259 1297348 host.go:66] Checking if "multinode-974434" exists ...
	I0127 14:30:57.608622 1297348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:30:57.608694 1297348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-974434
	I0127 14:30:57.627007 1297348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34065 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/multinode-974434/id_rsa Username:docker}
	I0127 14:30:57.713847 1297348 ssh_runner.go:195] Run: systemctl --version
	I0127 14:30:57.718077 1297348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:30:57.729550 1297348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 14:30:57.784402 1297348 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-27 14:30:57.775457247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 14:30:57.785162 1297348 kubeconfig.go:125] found "multinode-974434" server: "https://192.168.67.2:8443"
	I0127 14:30:57.785199 1297348 api_server.go:166] Checking apiserver status ...
	I0127 14:30:57.785250 1297348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:30:57.797213 1297348 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	I0127 14:30:57.807140 1297348 api_server.go:182] apiserver freezer: "12:freezer:/docker/c5e061c99bcade68de02dca0d088cb5f584df82867c6b45d2cba2d6dd33a559b/crio/crio-4b3b2c7bb6f29aa724ceaf63ef5aa3a0fabe41a792aa5d0fe30ab10e9e3c5c5e"
	I0127 14:30:57.807257 1297348 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c5e061c99bcade68de02dca0d088cb5f584df82867c6b45d2cba2d6dd33a559b/crio/crio-4b3b2c7bb6f29aa724ceaf63ef5aa3a0fabe41a792aa5d0fe30ab10e9e3c5c5e/freezer.state
	I0127 14:30:57.816138 1297348 api_server.go:204] freezer state: "THAWED"
	I0127 14:30:57.816175 1297348 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0127 14:30:57.824438 1297348 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0127 14:30:57.824470 1297348 status.go:463] multinode-974434 apiserver status = Running (err=<nil>)
	I0127 14:30:57.824481 1297348 status.go:176] multinode-974434 status: &{Name:multinode-974434 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:30:57.824499 1297348 status.go:174] checking status of multinode-974434-m02 ...
	I0127 14:30:57.824916 1297348 cli_runner.go:164] Run: docker container inspect multinode-974434-m02 --format={{.State.Status}}
	I0127 14:30:57.841929 1297348 status.go:371] multinode-974434-m02 host status = "Running" (err=<nil>)
	I0127 14:30:57.841975 1297348 host.go:66] Checking if "multinode-974434-m02" exists ...
	I0127 14:30:57.842297 1297348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-974434-m02
	I0127 14:30:57.859674 1297348 host.go:66] Checking if "multinode-974434-m02" exists ...
	I0127 14:30:57.859999 1297348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:30:57.860045 1297348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-974434-m02
	I0127 14:30:57.877464 1297348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34070 SSHKeyPath:/home/jenkins/minikube-integration/20325-1178062/.minikube/machines/multinode-974434-m02/id_rsa Username:docker}
	I0127 14:30:57.966492 1297348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:30:57.978164 1297348 status.go:176] multinode-974434-m02 status: &{Name:multinode-974434-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:30:57.978202 1297348 status.go:174] checking status of multinode-974434-m03 ...
	I0127 14:30:57.978574 1297348 cli_runner.go:164] Run: docker container inspect multinode-974434-m03 --format={{.State.Status}}
	I0127 14:30:57.995778 1297348 status.go:371] multinode-974434-m03 host status = "Stopped" (err=<nil>)
	I0127 14:30:57.995803 1297348 status.go:384] host is not running, skipping remaining checks
	I0127 14:30:57.995811 1297348 status.go:176] multinode-974434-m03 status: &{Name:multinode-974434-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

TestMultiNode/serial/StartAfterStop (9.98s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-974434 node start m03 -v=7 --alsologtostderr: (9.208069917s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.98s)

TestMultiNode/serial/RestartKeepsNodes (88.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-974434
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-974434
E0127 14:31:30.452606 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-974434: (24.753726821s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-974434 --wait=true -v=8 --alsologtostderr
E0127 14:32:26.355413 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-974434 --wait=true -v=8 --alsologtostderr: (1m3.882014754s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-974434
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.78s)

TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-974434 node delete m03: (4.620631446s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

TestMultiNode/serial/StopMultiNode (23.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-974434 stop: (23.641894042s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-974434 status: exit status 7 (95.589426ms)

-- stdout --
	multinode-974434
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-974434-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-974434 status --alsologtostderr: exit status 7 (96.550235ms)

-- stdout --
	multinode-974434
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-974434-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 14:33:05.840118 1304783 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:33:05.840345 1304783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:33:05.840374 1304783 out.go:358] Setting ErrFile to fd 2...
	I0127 14:33:05.840395 1304783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:33:05.840676 1304783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 14:33:05.840944 1304783 out.go:352] Setting JSON to false
	I0127 14:33:05.841034 1304783 mustload.go:65] Loading cluster: multinode-974434
	I0127 14:33:05.841084 1304783 notify.go:220] Checking for updates...
	I0127 14:33:05.841576 1304783 config.go:182] Loaded profile config "multinode-974434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:33:05.841620 1304783 status.go:174] checking status of multinode-974434 ...
	I0127 14:33:05.842172 1304783 cli_runner.go:164] Run: docker container inspect multinode-974434 --format={{.State.Status}}
	I0127 14:33:05.862313 1304783 status.go:371] multinode-974434 host status = "Stopped" (err=<nil>)
	I0127 14:33:05.862338 1304783 status.go:384] host is not running, skipping remaining checks
	I0127 14:33:05.862346 1304783 status.go:176] multinode-974434 status: &{Name:multinode-974434 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:33:05.862384 1304783 status.go:174] checking status of multinode-974434-m02 ...
	I0127 14:33:05.862706 1304783 cli_runner.go:164] Run: docker container inspect multinode-974434-m02 --format={{.State.Status}}
	I0127 14:33:05.885795 1304783 status.go:371] multinode-974434-m02 host status = "Stopped" (err=<nil>)
	I0127 14:33:05.885829 1304783 status.go:384] host is not running, skipping remaining checks
	I0127 14:33:05.885836 1304783 status.go:176] multinode-974434-m02 status: &{Name:multinode-974434-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

TestMultiNode/serial/RestartMultiNode (55.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-974434 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-974434 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.399706688s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-974434 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.08s)

TestMultiNode/serial/ValidateNameConflict (31.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-974434
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-974434-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-974434-m02 --driver=docker  --container-runtime=crio: exit status 14 (102.386492ms)

-- stdout --
	* [multinode-974434-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20325
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-974434-m02' is duplicated with machine name 'multinode-974434-m02' in profile 'multinode-974434'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-974434-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-974434-m03 --driver=docker  --container-runtime=crio: (28.539360962s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-974434
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-974434: exit status 80 (322.324138ms)

-- stdout --
	* Adding node m03 to cluster multinode-974434 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-974434-m03 already exists in multinode-974434-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-974434-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-974434-m03: (2.018639826s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.04s)

TestPreload (130.89s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-136715 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0127 14:35:07.387470 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-136715 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.045661789s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-136715 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-136715 image pull gcr.io/k8s-minikube/busybox: (3.47272135s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-136715
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-136715: (5.774422689s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-136715 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-136715 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.868508268s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-136715 image list
helpers_test.go:175: Cleaning up "test-preload-136715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-136715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-136715: (2.42592121s)
--- PASS: TestPreload (130.89s)

TestScheduledStopUnix (105.66s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-172189 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-172189 --memory=2048 --driver=docker  --container-runtime=crio: (29.008718717s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172189 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-172189 -n scheduled-stop-172189
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172189 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 14:37:16.515342 1183449 retry.go:31] will retry after 93.808µs: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.516521 1183449 retry.go:31] will retry after 131.436µs: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.516946 1183449 retry.go:31] will retry after 188.045µs: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.517656 1183449 retry.go:31] will retry after 303.103µs: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.518786 1183449 retry.go:31] will retry after 366.667µs: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.519909 1183449 retry.go:31] will retry after 940.274µs: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.520954 1183449 retry.go:31] will retry after 914.501µs: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.522095 1183449 retry.go:31] will retry after 1.248719ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.524344 1183449 retry.go:31] will retry after 3.573179ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.528591 1183449 retry.go:31] will retry after 4.823862ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.533825 1183449 retry.go:31] will retry after 4.435398ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.539055 1183449 retry.go:31] will retry after 5.012644ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.544220 1183449 retry.go:31] will retry after 12.969324ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.557577 1183449 retry.go:31] will retry after 12.459486ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.577097 1183449 retry.go:31] will retry after 20.592844ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
I0127 14:37:16.598300 1183449 retry.go:31] will retry after 57.22065ms: open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/scheduled-stop-172189/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172189 --cancel-scheduled
E0127 14:37:26.358900 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172189 -n scheduled-stop-172189
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-172189
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-172189 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-172189
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-172189: exit status 7 (71.838996ms)

-- stdout --
	scheduled-stop-172189
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172189 -n scheduled-stop-172189
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-172189 -n scheduled-stop-172189: exit status 7 (70.824575ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-172189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-172189
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-172189: (5.005195172s)
--- PASS: TestScheduledStopUnix (105.66s)

TestInsufficientStorage (10.25s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-508390 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-508390 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.787030403s)

-- stdout --
	{"specversion":"1.0","id":"396c867b-6fce-42f6-8ff6-37af51155c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-508390] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99757962-c666-4691-aec6-0e9a5de3a408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20325"}}
	{"specversion":"1.0","id":"4361deea-781d-47ec-b7dc-86d842d5099b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9512cec1-c715-4396-ae03-fae69b709c87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig"}}
	{"specversion":"1.0","id":"dd45b149-179b-4f34-8530-782d0b378316","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube"}}
	{"specversion":"1.0","id":"1a8b538b-2653-4cb2-9ee2-e420a5bc75c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e00f3cf4-e638-49f1-bf19-e790a1ed660f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"93d622f7-dcf7-4935-9513-71441ceb791d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5659862b-5766-444b-9093-012ab36580fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ec308fa4-0b8c-42c1-af8d-a6911ad6b2e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee4b639c-e3fe-4bfa-9f1e-8db45528d029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c789d674-855c-4ce5-822a-cf69f35ebfd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-508390\" primary control-plane node in \"insufficient-storage-508390\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"62579019-8e13-4230-9e8b-3d0d49d06cdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2adb1e04-31be-49f5-b9db-3ce9b7f79042","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"52ef70f1-a5ba-4e95-bad1-1829879e2626","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-508390 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-508390 --output=json --layout=cluster: exit status 7 (274.861872ms)

-- stdout --
	{"Name":"insufficient-storage-508390","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-508390","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0127 14:38:40.707267 1322235 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-508390" does not appear in /home/jenkins/minikube-integration/20325-1178062/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-508390 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-508390 --output=json --layout=cluster: exit status 7 (283.658276ms)

-- stdout --
	{"Name":"insufficient-storage-508390","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-508390","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0127 14:38:40.989978 1322297 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-508390" does not appear in /home/jenkins/minikube-integration/20325-1178062/kubeconfig
	E0127 14:38:41.000251 1322297 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/insufficient-storage-508390/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-508390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-508390
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-508390: (1.907957249s)
--- PASS: TestInsufficientStorage (10.25s)

TestRunningBinaryUpgrade (65.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.735272064 start -p running-upgrade-579663 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.735272064 start -p running-upgrade-579663 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.663382494s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-579663 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-579663 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.958348101s)
helpers_test.go:175: Cleaning up "running-upgrade-579663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-579663
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-579663: (3.084466935s)
--- PASS: TestRunningBinaryUpgrade (65.49s)

TestKubernetesUpgrade (385.89s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-782765 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-782765 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.972673522s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-782765
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-782765: (1.33168747s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-782765 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-782765 status --format={{.Host}}: exit status 7 (199.572707ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-782765 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-782765 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m43.668720013s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-782765 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-782765 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-782765 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (113.189834ms)

-- stdout --
	* [kubernetes-upgrade-782765] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20325
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-782765
	    minikube start -p kubernetes-upgrade-782765 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7827652 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-782765 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-782765 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-782765 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.807397957s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-782765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-782765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-782765: (2.63913232s)
--- PASS: TestKubernetesUpgrade (385.89s)

TestMissingContainerUpgrade (167.94s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.682163659 start -p missing-upgrade-922860 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.682163659 start -p missing-upgrade-922860 --memory=2200 --driver=docker  --container-runtime=crio: (1m26.473813061s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-922860
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-922860: (13.671687643s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-922860
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-922860 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0127 14:40:29.436799 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-922860 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.849176531s)
helpers_test.go:175: Cleaning up "missing-upgrade-922860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-922860
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-922860: (2.138175664s)
--- PASS: TestMissingContainerUpgrade (167.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273090 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-273090 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (95.338176ms)

-- stdout --
	* [NoKubernetes-273090] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20325
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (37.96s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273090 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273090 --driver=docker  --container-runtime=crio: (37.508566465s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-273090 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.96s)

TestNoKubernetes/serial/StartWithStopK8s (29.98s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273090 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273090 --no-kubernetes --driver=docker  --container-runtime=crio: (27.387045285s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-273090 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-273090 status -o json: exit status 2 (446.401175ms)

-- stdout --
	{"Name":"NoKubernetes-273090","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-273090
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-273090: (2.145272364s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.98s)

TestNoKubernetes/serial/Start (10.17s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273090 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273090 --no-kubernetes --driver=docker  --container-runtime=crio: (10.174604679s)
--- PASS: TestNoKubernetes/serial/Start (10.17s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-273090 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-273090 "sudo systemctl is-active --quiet service kubelet": exit status 1 (460.912983ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)

TestNoKubernetes/serial/ProfileList (4.46s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (3.937639686s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.46s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-273090
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-273090: (1.24759159s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (7.1s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273090 --driver=docker  --container-runtime=crio
E0127 14:40:07.386940 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273090 --driver=docker  --container-runtime=crio: (7.095420578s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-273090 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-273090 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.839479ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (0.8s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.80s)

TestStoppedBinaryUpgrade/Upgrade (72.44s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2574924353 start -p stopped-upgrade-063275 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2574924353 start -p stopped-upgrade-063275 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.980565025s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2574924353 -p stopped-upgrade-063275 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2574924353 -p stopped-upgrade-063275 stop: (2.588361172s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-063275 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0127 14:42:26.355200 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-063275 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.871373543s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.44s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.3s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-063275
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-063275: (2.299816057s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.30s)
TestPause/serial/Start (81.49s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-907663 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0127 14:45:07.387406 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-907663 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.487754561s)
--- PASS: TestPause/serial/Start (81.49s)
TestPause/serial/SecondStartNoReconfiguration (29.39s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-907663 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-907663 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.367858994s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.39s)
TestPause/serial/Pause (0.86s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-907663 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)
TestPause/serial/VerifyStatus (0.42s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-907663 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-907663 --output=json --layout=cluster: exit status 2 (423.015456ms)
-- stdout --
	{"Name":"pause-907663","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-907663","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
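For readers of these reports: the `--layout=cluster` JSON above encodes state with HTTP-style status codes (200 OK, 405 Stopped, 418 Paused). A minimal sketch of inspecting it programmatically, with the JSON copied verbatim from the output above:

```python
import json

# Status JSON as printed by `minikube status --output=json --layout=cluster`
# (copied from the VerifyStatus output above).
status_json = '''{"Name":"pause-907663","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-907663","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

cluster = json.loads(status_json)

# Cluster-level state: 418 is reused by minikube to mean "Paused".
print(cluster["StatusName"])  # Paused

# Per-node component states (here: apiserver Paused, kubelet Stopped).
for node in cluster["Nodes"]:
    for name, comp in node["Components"].items():
        print(name, comp["StatusName"])
```

This is why the test tolerates exit status 2: a paused cluster is a deliberate non-Running state, and the assertion is on the JSON payload rather than the exit code.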
TestPause/serial/Unpause (0.84s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-907663 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)
TestPause/serial/PauseAgain (0.85s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-907663 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)
TestPause/serial/DeletePaused (2.85s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-907663 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-907663 --alsologtostderr -v=5: (2.844983581s)
--- PASS: TestPause/serial/DeletePaused (2.85s)
TestPause/serial/VerifyDeletedResources (0.38s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-907663
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-907663: exit status 1 (17.967668ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-907663: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)
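The volume check above relies on a specific failure shape: `docker volume inspect` on a deleted volume exits non-zero, prints `[]` on stdout, and a "no such volume" error on stderr. A minimal sketch of that interpretation (the helper name is illustrative, not minikube's actual code):

```python
def volume_deleted(exit_code: int, stdout: str, stderr: str) -> bool:
    """Interpret the result of `docker volume inspect <name>`.

    A deleted volume yields a non-zero exit code, an empty JSON array on
    stdout, and a "no such volume" error on stderr -- exactly the shape
    seen in the VerifyDeletedResources output above.
    """
    return exit_code != 0 and stdout.strip() == "[]" and "no such volume" in stderr

# Values copied from the test output above.
print(volume_deleted(
    1, "[]", "Error response from daemon: get pause-907663: no such volume"))  # True
```

Note that the non-zero exit here is the *expected* outcome: the test treats the inspect failure as proof that `delete -p` removed the volume.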
TestNetworkPlugins/group/false (4.82s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-427215 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-427215 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (258.364144ms)
-- stdout --
	* [false-427215] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20325
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0127 14:46:33.985355 1362439 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:46:33.985596 1362439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:46:33.985626 1362439 out.go:358] Setting ErrFile to fd 2...
	I0127 14:46:33.985645 1362439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:46:33.985945 1362439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20325-1178062/.minikube/bin
	I0127 14:46:33.986420 1362439 out.go:352] Setting JSON to false
	I0127 14:46:33.987545 1362439 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16145,"bootTime":1737973049,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0127 14:46:33.987641 1362439 start.go:139] virtualization:  
	I0127 14:46:33.991435 1362439 out.go:177] * [false-427215] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 14:46:33.995301 1362439 out.go:177]   - MINIKUBE_LOCATION=20325
	I0127 14:46:33.995379 1362439 notify.go:220] Checking for updates...
	I0127 14:46:34.007922 1362439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:46:34.011435 1362439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20325-1178062/kubeconfig
	I0127 14:46:34.014418 1362439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20325-1178062/.minikube
	I0127 14:46:34.017353 1362439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 14:46:34.020323 1362439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:46:34.023998 1362439 config.go:182] Loaded profile config "kubernetes-upgrade-782765": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:46:34.024182 1362439 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:46:34.060055 1362439 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 14:46:34.060178 1362439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 14:46:34.155671 1362439 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 14:46:34.144593673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 14:46:34.155787 1362439 docker.go:318] overlay module found
	I0127 14:46:34.158947 1362439 out.go:177] * Using the docker driver based on user configuration
	I0127 14:46:34.161816 1362439 start.go:297] selected driver: docker
	I0127 14:46:34.161839 1362439 start.go:901] validating driver "docker" against <nil>
	I0127 14:46:34.161855 1362439 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:46:34.165465 1362439 out.go:201] 
	W0127 14:46:34.168402 1362439 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0127 14:46:34.171383 1362439 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-427215 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-427215

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-427215

>>> host: /etc/nsswitch.conf:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /etc/hosts:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /etc/resolv.conf:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-427215

>>> host: crictl pods:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: crictl containers:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> k8s: describe netcat deployment:
error: context "false-427215" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-427215" does not exist

>>> k8s: netcat logs:
error: context "false-427215" does not exist

>>> k8s: describe coredns deployment:
error: context "false-427215" does not exist

>>> k8s: describe coredns pods:
error: context "false-427215" does not exist

>>> k8s: coredns logs:
error: context "false-427215" does not exist

>>> k8s: describe api server pod(s):
error: context "false-427215" does not exist

>>> k8s: api server logs:
error: context "false-427215" does not exist

>>> host: /etc/cni:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: ip a s:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: ip r s:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: iptables-save:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: iptables table nat:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> k8s: describe kube-proxy daemon set:
error: context "false-427215" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-427215" does not exist

>>> k8s: kube-proxy logs:
error: context "false-427215" does not exist

>>> host: kubelet daemon status:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: kubelet daemon config:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> k8s: kubelet logs:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:46:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-782765
contexts:
- context:
    cluster: kubernetes-upgrade-782765
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:46:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-782765
  name: kubernetes-upgrade-782765
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-782765
  user:
    client-certificate: /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kubernetes-upgrade-782765/client.crt
    client-key: /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kubernetes-upgrade-782765/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-427215

>>> host: docker daemon status:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: docker daemon config:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /etc/docker/daemon.json:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: docker system info:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: cri-docker daemon status:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: cri-docker daemon config:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: cri-dockerd version:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: containerd daemon status:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: containerd daemon config:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /etc/containerd/config.toml:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: containerd config dump:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: crio daemon status:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: crio daemon config:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: /etc/crio:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"

>>> host: crio config:
* Profile "false-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427215"
----------------------- debugLogs end: false-427215 [took: 4.295707312s] --------------------------------
helpers_test.go:175: Cleaning up "false-427215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-427215
--- PASS: TestNetworkPlugins/group/false (4.82s)

TestStartStop/group/old-k8s-version/serial/FirstStart (180.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-414237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0127 14:48:10.454198 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:50:07.387474 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-414237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m0.612811788s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (180.61s)

TestStartStop/group/no-preload/serial/FirstStart (64.61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-132969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-132969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (1m4.610569353s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.61s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-414237 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d7885686-d2e1-41f4-8758-28cb1e547244] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d7885686-d2e1-41f4-8758-28cb1e547244] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.006023858s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-414237 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.75s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-414237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-414237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.514593068s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-414237 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.82s)

TestStartStop/group/old-k8s-version/serial/Stop (12.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-414237 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-414237 --alsologtostderr -v=3: (12.341269879s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-414237 -n old-k8s-version-414237
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-414237 -n old-k8s-version-414237: exit status 7 (142.272036ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-414237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/old-k8s-version/serial/SecondStart (374.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-414237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-414237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (6m14.214645105s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-414237 -n old-k8s-version-414237
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (374.67s)

TestStartStop/group/no-preload/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-132969 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dbd8d9a8-ed0b-45cf-bae2-41982dc5cfce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dbd8d9a8-ed0b-45cf-bae2-41982dc5cfce] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004579831s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-132969 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-132969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-132969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.095667712s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-132969 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (12.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-132969 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-132969 --alsologtostderr -v=3: (12.35845067s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-132969 -n no-preload-132969
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-132969 -n no-preload-132969: exit status 7 (73.185405ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-132969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (300.05s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-132969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 14:52:26.354851 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:55:07.387654 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:57:09.438934 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-132969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m59.658089375s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-132969 -n no-preload-132969
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nwf2n" [d9a65628-6f1b-4e42-9964-97a91052eebe] Running
E0127 14:57:26.355012 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003668131s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nwf2n" [d9a65628-6f1b-4e42-9964-97a91052eebe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003993661s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-132969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-132969 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-132969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-132969 -n no-preload-132969
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-132969 -n no-preload-132969: exit status 2 (324.884999ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-132969 -n no-preload-132969
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-132969 -n no-preload-132969: exit status 2 (331.954554ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-132969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-132969 -n no-preload-132969
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-132969 -n no-preload-132969
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

TestStartStop/group/embed-certs/serial/FirstStart (59.77s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-568483 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-568483 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (59.766159161s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.77s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9jf7t" [0d3c3642-ad72-45e8-9c77-29739e0f526e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003895196s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9jf7t" [0d3c3642-ad72-45e8-9c77-29739e0f526e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004598786s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-414237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-414237 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-414237 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-414237 --alsologtostderr -v=1: (1.065891122s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-414237 -n old-k8s-version-414237
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-414237 -n old-k8s-version-414237: exit status 2 (417.252287ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-414237 -n old-k8s-version-414237
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-414237 -n old-k8s-version-414237: exit status 2 (416.545698ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-414237 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-414237 --alsologtostderr -v=1: (1.086591717s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-414237 -n old-k8s-version-414237
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-414237 -n old-k8s-version-414237
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.81s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-213275 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-213275 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (1m21.174526452s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.18s)

TestStartStop/group/embed-certs/serial/DeployApp (11.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-568483 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a58355ce-8c25-41c5-accc-e3385cd04243] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a58355ce-8c25-41c5-accc-e3385cd04243] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003489363s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-568483 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-568483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-568483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008531254s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-568483 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-568483 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-568483 --alsologtostderr -v=3: (11.955483248s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-568483 -n embed-certs-568483
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-568483 -n embed-certs-568483: exit status 7 (74.252488ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-568483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (266.61s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-568483 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-568483 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m26.266203698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-568483 -n embed-certs-568483
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.61s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-213275 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd1e7bb0-d84a-4f23-a9f0-4000dd3b1395] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd1e7bb0-d84a-4f23-a9f0-4000dd3b1395] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00421365s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-213275 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-213275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-213275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.28108922s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-213275 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.44s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-213275 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-213275 --alsologtostderr -v=3: (12.042645119s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275: exit status 7 (71.635164ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-213275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (281.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-213275 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 15:00:07.386821 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:02.769478 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:02.776208 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:02.787839 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:02.809180 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:02.850789 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:02.932268 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:03.093664 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:03.415078 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:04.056463 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:05.337710 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:07.899330 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:13.020905 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:23.262244 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:43.744042 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:58.451948 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:58.458311 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:58.469689 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:58.491080 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:58.532533 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:58.614073 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:58.775590 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:59.097460 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:01:59.739320 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:02:01.021069 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:02:03.582976 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:02:08.705231 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:02:18.947519 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:02:24.706454 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:02:26.355260 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:02:39.429297 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:03:20.391396 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-213275 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m41.496301488s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (281.97s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zcsl5" [3a5fd224-6f56-4a19-93a3-2572cf89b8b0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004363463s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zcsl5" [3a5fd224-6f56-4a19-93a3-2572cf89b8b0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005171392s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-568483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-568483 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-568483 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-568483 -n embed-certs-568483
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-568483 -n embed-certs-568483: exit status 2 (311.58024ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-568483 -n embed-certs-568483
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-568483 -n embed-certs-568483: exit status 2 (325.896849ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-568483 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-568483 -n embed-certs-568483
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-568483 -n embed-certs-568483
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.15s)

TestStartStop/group/newest-cni/serial/FirstStart (38.6s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-499991 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-499991 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (38.601550173s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.60s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-499991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-499991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.088674281s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-499991 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-499991 --alsologtostderr -v=3: (1.230384359s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-l59ql" [288f914d-f354-4106-acfd-520844eb164f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006258678s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499991 -n newest-cni-499991
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499991 -n newest-cni-499991: exit status 7 (70.400653ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-499991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (20.2s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-499991 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-499991 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (19.753560155s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499991 -n newest-cni-499991
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.20s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-l59ql" [288f914d-f354-4106-acfd-520844eb164f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003790744s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-213275 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-213275 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-213275 --alsologtostderr -v=1
E0127 15:04:42.313360 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-213275 --alsologtostderr -v=1: (1.210116373s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275: exit status 2 (428.918192ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275: exit status 2 (462.028457ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-213275 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-213275 --alsologtostderr -v=1: (1.160314819s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-213275 -n default-k8s-diff-port-213275
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.66s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-499991 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

TestNetworkPlugins/group/auto/Start (89.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m29.800503725s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.80s)

TestStartStop/group/newest-cni/serial/Pause (4.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-499991 --alsologtostderr -v=1
E0127 15:04:50.455639 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-499991 --alsologtostderr -v=1: (1.131484215s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499991 -n newest-cni-499991
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499991 -n newest-cni-499991: exit status 2 (448.42458ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-499991 -n newest-cni-499991
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-499991 -n newest-cni-499991: exit status 2 (382.388132ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-499991 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499991 -n newest-cni-499991
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-499991 -n newest-cni-499991
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.39s)
E0127 15:11:02.769435 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:20.504205 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:20.510582 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:20.524727 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:20.546198 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:20.588610 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:20.670622 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:20.832122 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:21.153368 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:21.795439 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:23.077285 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:24.075037 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:24.081492 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:24.092901 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:24.114309 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:24.155742 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:24.237207 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:24.398738 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:24.720491 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:25.362689 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:25.639439 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:26.644832 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (85.87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0127 15:05:07.387401 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:06:02.769165 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.873473978s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.87s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-427215 "pgrep -a kubelet"
I0127 15:06:20.235619 1183449 config.go:182] Loaded profile config "auto-427215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-427215 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ktx77" [a1be3b2e-c0ee-40ba-b283-e2ca0aff2f14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ktx77" [a1be3b2e-c0ee-40ba-b283-e2ca0aff2f14] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004481271s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q2svm" [fc399a57-83e8-4ace-9409-3aa3e71a67d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005249917s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-427215 "pgrep -a kubelet"
I0127 15:06:30.390886 1183449 config.go:182] Loaded profile config "kindnet-427215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-427215 replace --force -f testdata/netcat-deployment.yaml
E0127 15:06:30.470009 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/old-k8s-version-414237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z96cf" [ee146162-bca7-43d9-8053-108a29314733] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-z96cf" [ee146162-bca7-43d9-8053-108a29314733] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005855312s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-427215 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-427215 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/calico/Start (71.37s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0127 15:06:58.451990 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.361999868s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.37s)

TestNetworkPlugins/group/custom-flannel/Start (70.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0127 15:07:26.155316 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/no-preload-132969/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:07:26.354830 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/addons-790770/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.303907706s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.30s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9wfg5" [ea59928f-2705-49eb-9e82-988028db8785] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004506692s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-427215 "pgrep -a kubelet"
I0127 15:08:12.117171 1183449 config.go:182] Loaded profile config "calico-427215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-427215 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qkd2j" [63b357c1-d1fc-41d4-adb8-eb3a64318f89] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-qkd2j" [63b357c1-d1fc-41d4-adb8-eb3a64318f89] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004107811s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.30s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-427215 "pgrep -a kubelet"
I0127 15:08:16.938432 1183449 config.go:182] Loaded profile config "custom-flannel-427215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-427215 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9xx8r" [d6a24af0-1057-48f0-b916-b8621db5b907] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9xx8r" [d6a24af0-1057-48f0-b916-b8621db5b907] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004110173s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-427215 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-427215 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (77.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.228331597s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.23s)

TestNetworkPlugins/group/flannel/Start (66.09s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0127 15:09:23.456001 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:23.462345 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:23.473678 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:23.495012 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:23.536382 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:23.617759 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:23.779199 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:24.100914 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:24.742872 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:26.024143 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:28.585394 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:33.706658 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:09:43.947982 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.091360744s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5mlvw" [02e12f7a-36ff-4e42-804c-a80ea61b66a2] Running
E0127 15:10:04.429835 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00387521s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-427215 "pgrep -a kubelet"
I0127 15:10:07.138254 1183449 config.go:182] Loaded profile config "flannel-427215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (15.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-427215 replace --force -f testdata/netcat-deployment.yaml
E0127 15:10:07.387784 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/functional-138053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zzfkm" [dc38a913-8384-424c-abf6-bf10242d52fb] Pending
helpers_test.go:344: "netcat-5d86dc444-zzfkm" [dc38a913-8384-424c-abf6-bf10242d52fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zzfkm" [dc38a913-8384-424c-abf6-bf10242d52fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.004524949s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-427215 "pgrep -a kubelet"
I0127 15:10:08.190102 1183449 config.go:182] Loaded profile config "enable-default-cni-427215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-427215 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t6fcx" [d7431fcb-26dc-4f77-a332-7a13d297831d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t6fcx" [d7431fcb-26dc-4f77-a332-7a13d297831d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004155799s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-427215 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-427215 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (42.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0127 15:10:45.391553 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/default-k8s-diff-port-213275/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-427215 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (42.92980046s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.93s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-427215 "pgrep -a kubelet"
I0127 15:11:28.621402 1183449 config.go:182] Loaded profile config "bridge-427215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-427215 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-46hng" [0b1c7975-927c-4cf9-8324-4866a078a177] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 15:11:29.206709 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:11:30.761181 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/auto-427215/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-46hng" [0b1c7975-927c-4cf9-8324-4866a078a177] Running
E0127 15:11:34.328307 1183449 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kindnet-427215/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003725016s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-427215 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-427215 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0.6s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-660192 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-660192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-660192
--- SKIP: TestDownloadOnlyKic (0.60s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-790770 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-735173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-735173
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

TestNetworkPlugins/group/kubenet (3.95s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-427215 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-427215

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-427215" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-427215" does not exist

>>> k8s: netcat logs:
error: context "kubenet-427215" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-427215" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-427215" does not exist

>>> k8s: coredns logs:
error: context "kubenet-427215" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-427215" does not exist

>>> k8s: api server logs:
error: context "kubenet-427215" does not exist
>>> host: /etc/cni:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: ip a s:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: ip r s:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: iptables-save:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: iptables table nat:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-427215" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-427215" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-427215" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: kubelet daemon config:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> k8s: kubelet logs:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20325-1178062/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:46:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-782765
contexts:
- context:
    cluster: kubernetes-upgrade-782765
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:46:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-782765
  name: kubernetes-upgrade-782765
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-782765
  user:
    client-certificate: /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kubernetes-upgrade-782765/client.crt
    client-key: /home/jenkins/minikube-integration/20325-1178062/.minikube/profiles/kubernetes-upgrade-782765/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-427215
>>> host: docker daemon status:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: docker daemon config:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: docker system info:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: cri-docker daemon status:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: cri-docker daemon config:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: cri-dockerd version:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: containerd daemon status:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: containerd daemon config:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: containerd config dump:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"
>>> host: crio daemon status:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: crio daemon config:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: /etc/crio:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

>>> host: crio config:
* Profile "kubenet-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427215"

----------------------- debugLogs end: kubenet-427215 [took: 3.736971267s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-427215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-427215
--- SKIP: TestNetworkPlugins/group/kubenet (3.95s)
TestNetworkPlugins/group/cilium (5.98s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629:
----------------------- debugLogs start: cilium-427215 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-427215

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-427215
>>> host: /etc/nsswitch.conf:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: /etc/hosts:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: /etc/resolv.conf:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-427215

>>> host: crictl pods:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: crictl containers:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"
>>> k8s: describe netcat deployment:
error: context "cilium-427215" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-427215" does not exist

>>> k8s: netcat logs:
error: context "cilium-427215" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-427215" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-427215" does not exist

>>> k8s: coredns logs:
error: context "cilium-427215" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-427215" does not exist

>>> k8s: api server logs:
error: context "cilium-427215" does not exist
>>> host: /etc/cni:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: ip a s:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: ip r s:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: iptables-save:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: iptables table nat:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-427215

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-427215

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-427215" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-427215" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-427215

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-427215

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-427215" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-427215" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-427215" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-427215" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-427215" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: kubelet daemon config:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> k8s: kubelet logs:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-427215
>>> host: docker daemon status:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: cri-dockerd version:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: containerd daemon status:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: containerd daemon config:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: containerd config dump:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: crio daemon status:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: crio daemon config:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: /etc/crio:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

>>> host: crio config:
* Profile "cilium-427215" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427215"

----------------------- debugLogs end: cilium-427215 [took: 5.79509605s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-427215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-427215
--- SKIP: TestNetworkPlugins/group/cilium (5.98s)