Test Report: Docker_Linux_crio_arm64 20451

3de5109224746595ef816ce07f095d1725de7bd9:2025-02-24:38483
Tests failed (1/331)

Order  Failed test                   Duration
36     TestAddons/parallel/Ingress   155.09s
TestAddons/parallel/Ingress (155.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-961822 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-961822 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-961822 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [435bd033-977a-424d-a92f-5d65631877e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [435bd033-977a-424d-a92f-5d65631877e0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003979236s
I0224 12:38:40.197511  573823 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961822 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.328551025s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-961822 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-961822
helpers_test.go:235: (dbg) docker inspect addons-961822:

-- stdout --
	[
	    {
	        "Id": "8cb3793e684c56c1229c9a3e36d687f5c7dde5c0149427f79383f91d35f59b11",
	        "Created": "2025-02-24T12:35:03.458330139Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 574976,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-24T12:35:03.530987949Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97f64c6c1710fa51774ed1bcabfea9e0981a3c815376cca47782248110390c98",
	        "ResolvConfPath": "/var/lib/docker/containers/8cb3793e684c56c1229c9a3e36d687f5c7dde5c0149427f79383f91d35f59b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8cb3793e684c56c1229c9a3e36d687f5c7dde5c0149427f79383f91d35f59b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/8cb3793e684c56c1229c9a3e36d687f5c7dde5c0149427f79383f91d35f59b11/hosts",
	        "LogPath": "/var/lib/docker/containers/8cb3793e684c56c1229c9a3e36d687f5c7dde5c0149427f79383f91d35f59b11/8cb3793e684c56c1229c9a3e36d687f5c7dde5c0149427f79383f91d35f59b11-json.log",
	        "Name": "/addons-961822",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-961822:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-961822",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8cb3793e684c56c1229c9a3e36d687f5c7dde5c0149427f79383f91d35f59b11",
	                "LowerDir": "/var/lib/docker/overlay2/df10a4602fedfd07522d1be2b0f5283f1897710234b14b784f866a10eb45ed07-init/diff:/var/lib/docker/overlay2/275b11281f6019c644700d1bbb18fd42a48a9a1e92850c1fdfdfd21e77ed083e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df10a4602fedfd07522d1be2b0f5283f1897710234b14b784f866a10eb45ed07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df10a4602fedfd07522d1be2b0f5283f1897710234b14b784f866a10eb45ed07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df10a4602fedfd07522d1be2b0f5283f1897710234b14b784f866a10eb45ed07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-961822",
	                "Source": "/var/lib/docker/volumes/addons-961822/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-961822",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-961822",
	                "name.minikube.sigs.k8s.io": "addons-961822",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b45bdc4e84855b8841e1f0253ed6881ce26165d393232eb74e15a86390ddfc50",
	            "SandboxKey": "/var/run/docker/netns/b45bdc4e8485",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-961822": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:01:66:f9:e8:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6ef359d673296a8aff1c9f09a0c8114255a948f4aa767c57c04a1a44ba50c43",
	                    "EndpointID": "aa01a4fa1f46d5662edcacc62deaa415fa590287c2060b5fac8cc7921161e42a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-961822",
	                        "8cb3793e684c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-961822 -n addons-961822
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-961822 logs -n 25: (1.697026868s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-961008                                                                     | download-only-961008   | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC | 24 Feb 25 12:34 UTC |
	| start   | --download-only -p                                                                          | download-docker-157710 | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC |                     |
	|         | download-docker-157710                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-157710                                                                   | download-docker-157710 | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC | 24 Feb 25 12:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-526164   | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC |                     |
	|         | binary-mirror-526164                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45873                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-526164                                                                     | binary-mirror-526164   | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC | 24 Feb 25 12:34 UTC |
	| addons  | disable dashboard -p                                                                        | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC |                     |
	|         | addons-961822                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC |                     |
	|         | addons-961822                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-961822 --wait=true                                                                | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC | 24 Feb 25 12:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-961822 addons disable                                                                | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:37 UTC | 24 Feb 25 12:37 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-961822 addons disable                                                                | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:37 UTC | 24 Feb 25 12:37 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:37 UTC | 24 Feb 25 12:37 UTC |
	|         | -p addons-961822                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-961822 addons disable                                                                | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:38 UTC | 24 Feb 25 12:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-961822 ip                                                                            | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:38 UTC | 24 Feb 25 12:38 UTC |
	| addons  | addons-961822 addons disable                                                                | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:38 UTC | 24 Feb 25 12:38 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-961822 addons                                                                        | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:38 UTC | 24 Feb 25 12:38 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-961822 addons                                                                        | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:38 UTC | 24 Feb 25 12:38 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-961822 ssh curl -s                                                                   | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-961822 addons                                                                        | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:38 UTC | 24 Feb 25 12:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-961822 addons                                                                        | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:38 UTC | 24 Feb 25 12:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-961822 addons                                                                        | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:39 UTC | 24 Feb 25 12:39 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-961822 addons disable                                                                | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:39 UTC | 24 Feb 25 12:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-961822 ssh cat                                                                       | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:39 UTC | 24 Feb 25 12:39 UTC |
	|         | /opt/local-path-provisioner/pvc-6aadecd2-05d5-42b7-a214-c452c036cc3a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-961822 addons disable                                                                | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:39 UTC | 24 Feb 25 12:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-961822 addons                                                                        | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:40 UTC | 24 Feb 25 12:40 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-961822 ip                                                                            | addons-961822          | jenkins | v1.35.0 | 24 Feb 25 12:40 UTC | 24 Feb 25 12:40 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 12:34:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 12:34:37.715403  574586 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:34:37.715631  574586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:34:37.715664  574586 out.go:358] Setting ErrFile to fd 2...
	I0224 12:34:37.715689  574586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:34:37.716027  574586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 12:34:37.716650  574586 out.go:352] Setting JSON to false
	I0224 12:34:37.717818  574586 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11826,"bootTime":1740388652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0224 12:34:37.717946  574586 start.go:139] virtualization:  
	I0224 12:34:37.721720  574586 out.go:177] * [addons-961822] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0224 12:34:37.725634  574586 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 12:34:37.725674  574586 notify.go:220] Checking for updates...
	I0224 12:34:37.731739  574586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:34:37.734756  574586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	I0224 12:34:37.737740  574586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	I0224 12:34:37.740637  574586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0224 12:34:37.743611  574586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 12:34:37.746625  574586 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:34:37.775346  574586 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 12:34:37.775470  574586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:34:37.833650  574586 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-02-24 12:34:37.824729364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:34:37.833767  574586 docker.go:318] overlay module found
	I0224 12:34:37.836975  574586 out.go:177] * Using the docker driver based on user configuration
	I0224 12:34:37.839806  574586 start.go:297] selected driver: docker
	I0224 12:34:37.839831  574586 start.go:901] validating driver "docker" against <nil>
	I0224 12:34:37.839847  574586 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 12:34:37.840617  574586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:34:37.901353  574586 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-02-24 12:34:37.891655985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:34:37.901577  574586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 12:34:37.901811  574586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 12:34:37.904800  574586 out.go:177] * Using Docker driver with root privileges
	I0224 12:34:37.907689  574586 cni.go:84] Creating CNI manager for ""
	I0224 12:34:37.907773  574586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0224 12:34:37.907788  574586 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0224 12:34:37.907908  574586 start.go:340] cluster config:
	{Name:addons-961822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-961822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:34:37.911138  574586 out.go:177] * Starting "addons-961822" primary control-plane node in "addons-961822" cluster
	I0224 12:34:37.914005  574586 cache.go:121] Beginning downloading kic base image for docker with crio
	I0224 12:34:37.916962  574586 out.go:177] * Pulling base image v0.0.46-1740046583-20436 ...
	I0224 12:34:37.919850  574586 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:34:37.919913  574586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0224 12:34:37.919923  574586 cache.go:56] Caching tarball of preloaded images
	I0224 12:34:37.919962  574586 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local docker daemon
	I0224 12:34:37.920052  574586 preload.go:172] Found /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0224 12:34:37.920064  574586 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0224 12:34:37.920436  574586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/config.json ...
	I0224 12:34:37.920458  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/config.json: {Name:mk6cffe3f03240bef50d877f9c20875b5e4715a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:34:37.938985  574586 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 to local cache
	I0224 12:34:37.939140  574586 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local cache directory
	I0224 12:34:37.939163  574586 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local cache directory, skipping pull
	I0224 12:34:37.939168  574586 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 exists in cache, skipping pull
	I0224 12:34:37.939179  574586 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 as a tarball
	I0224 12:34:37.939189  574586 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 from local cache
	I0224 12:34:55.731168  574586 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 from cached tarball
	I0224 12:34:55.731208  574586 cache.go:230] Successfully downloaded all kic artifacts
	I0224 12:34:55.731266  574586 start.go:360] acquireMachinesLock for addons-961822: {Name:mkfb77230626a173608cf814489aa10d3ac26fbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 12:34:55.732071  574586 start.go:364] duration metric: took 776.055µs to acquireMachinesLock for "addons-961822"
	I0224 12:34:55.732107  574586 start.go:93] Provisioning new machine with config: &{Name:addons-961822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-961822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 12:34:55.732194  574586 start.go:125] createHost starting for "" (driver="docker")
	I0224 12:34:55.735666  574586 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0224 12:34:55.735920  574586 start.go:159] libmachine.API.Create for "addons-961822" (driver="docker")
	I0224 12:34:55.735957  574586 client.go:168] LocalClient.Create starting
	I0224 12:34:55.736068  574586 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca.pem
	I0224 12:34:56.228080  574586 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/cert.pem
	I0224 12:34:56.969479  574586 cli_runner.go:164] Run: docker network inspect addons-961822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 12:34:56.986427  574586 cli_runner.go:211] docker network inspect addons-961822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 12:34:56.986518  574586 network_create.go:284] running [docker network inspect addons-961822] to gather additional debugging logs...
	I0224 12:34:56.986540  574586 cli_runner.go:164] Run: docker network inspect addons-961822
	W0224 12:34:57.005662  574586 cli_runner.go:211] docker network inspect addons-961822 returned with exit code 1
	I0224 12:34:57.005716  574586 network_create.go:287] error running [docker network inspect addons-961822]: docker network inspect addons-961822: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-961822 not found
	I0224 12:34:57.005731  574586 network_create.go:289] output of [docker network inspect addons-961822]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-961822 not found
	
	** /stderr **
	I0224 12:34:57.005853  574586 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 12:34:57.021747  574586 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a1f00}
	I0224 12:34:57.021790  574586 network_create.go:124] attempt to create docker network addons-961822 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0224 12:34:57.021849  574586 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-961822 addons-961822
	I0224 12:34:57.087701  574586 network_create.go:108] docker network addons-961822 192.168.49.0/24 created
	I0224 12:34:57.087734  574586 kic.go:121] calculated static IP "192.168.49.2" for the "addons-961822" container
	I0224 12:34:57.087826  574586 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 12:34:57.102610  574586 cli_runner.go:164] Run: docker volume create addons-961822 --label name.minikube.sigs.k8s.io=addons-961822 --label created_by.minikube.sigs.k8s.io=true
	I0224 12:34:57.119462  574586 oci.go:103] Successfully created a docker volume addons-961822
	I0224 12:34:57.119551  574586 cli_runner.go:164] Run: docker run --rm --name addons-961822-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961822 --entrypoint /usr/bin/test -v addons-961822:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 -d /var/lib
	I0224 12:34:58.853626  574586 cli_runner.go:217] Completed: docker run --rm --name addons-961822-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961822 --entrypoint /usr/bin/test -v addons-961822:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 -d /var/lib: (1.734033246s)
	I0224 12:34:58.853654  574586 oci.go:107] Successfully prepared a docker volume addons-961822
	I0224 12:34:58.853675  574586 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:34:58.853695  574586 kic.go:194] Starting extracting preloaded images to volume ...
	I0224 12:34:58.853769  574586 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-961822:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 12:35:03.381457  574586 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-961822:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.527641409s)
	I0224 12:35:03.381492  574586 kic.go:203] duration metric: took 4.52779468s to extract preloaded images to volume ...
	W0224 12:35:03.381643  574586 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0224 12:35:03.381764  574586 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 12:35:03.442927  574586 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-961822 --name addons-961822 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961822 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-961822 --network addons-961822 --ip 192.168.49.2 --volume addons-961822:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4
	I0224 12:35:03.759133  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Running}}
	I0224 12:35:03.784024  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:03.809712  574586 cli_runner.go:164] Run: docker exec addons-961822 stat /var/lib/dpkg/alternatives/iptables
	I0224 12:35:03.861292  574586 oci.go:144] the created container "addons-961822" has a running status.
	I0224 12:35:03.861324  574586 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa...
	I0224 12:35:04.877294  574586 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 12:35:04.898199  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:04.915377  574586 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 12:35:04.915396  574586 kic_runner.go:114] Args: [docker exec --privileged addons-961822 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 12:35:04.959151  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:04.975288  574586 machine.go:93] provisionDockerMachine start ...
	I0224 12:35:04.975392  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:04.992269  574586 main.go:141] libmachine: Using SSH client type: native
	I0224 12:35:04.992560  574586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0224 12:35:04.992576  574586 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 12:35:05.122498  574586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-961822
	
	I0224 12:35:05.122525  574586 ubuntu.go:169] provisioning hostname "addons-961822"
	I0224 12:35:05.122603  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:05.140351  574586 main.go:141] libmachine: Using SSH client type: native
	I0224 12:35:05.140624  574586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0224 12:35:05.140642  574586 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-961822 && echo "addons-961822" | sudo tee /etc/hostname
	I0224 12:35:05.278689  574586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-961822
	
	I0224 12:35:05.278767  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:05.296026  574586 main.go:141] libmachine: Using SSH client type: native
	I0224 12:35:05.296283  574586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0224 12:35:05.296306  574586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-961822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-961822/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-961822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 12:35:05.423098  574586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 12:35:05.423127  574586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20451-568444/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-568444/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-568444/.minikube}
	I0224 12:35:05.423200  574586 ubuntu.go:177] setting up certificates
	I0224 12:35:05.423210  574586 provision.go:84] configureAuth start
	I0224 12:35:05.423298  574586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961822
	I0224 12:35:05.440200  574586 provision.go:143] copyHostCerts
	I0224 12:35:05.440292  574586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-568444/.minikube/ca.pem (1082 bytes)
	I0224 12:35:05.440427  574586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-568444/.minikube/cert.pem (1123 bytes)
	I0224 12:35:05.440489  574586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-568444/.minikube/key.pem (1675 bytes)
	I0224 12:35:05.440540  574586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-568444/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca-key.pem org=jenkins.addons-961822 san=[127.0.0.1 192.168.49.2 addons-961822 localhost minikube]
	I0224 12:35:06.145806  574586 provision.go:177] copyRemoteCerts
	I0224 12:35:06.145885  574586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 12:35:06.145926  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:06.165482  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:06.260166  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0224 12:35:06.284768  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0224 12:35:06.308611  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 12:35:06.332052  574586 provision.go:87] duration metric: took 908.817855ms to configureAuth
	I0224 12:35:06.332078  574586 ubuntu.go:193] setting minikube options for container-runtime
	I0224 12:35:06.332255  574586 config.go:182] Loaded profile config "addons-961822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:35:06.332353  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:06.349191  574586 main.go:141] libmachine: Using SSH client type: native
	I0224 12:35:06.349447  574586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0224 12:35:06.349469  574586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0224 12:35:06.578688  574586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0224 12:35:06.578717  574586 machine.go:96] duration metric: took 1.603410082s to provisionDockerMachine
	I0224 12:35:06.578728  574586 client.go:171] duration metric: took 10.84276518s to LocalClient.Create
	I0224 12:35:06.578743  574586 start.go:167] duration metric: took 10.84282365s to libmachine.API.Create "addons-961822"
	I0224 12:35:06.578750  574586 start.go:293] postStartSetup for "addons-961822" (driver="docker")
	I0224 12:35:06.578760  574586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 12:35:06.578825  574586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 12:35:06.578868  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:06.597415  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:06.688268  574586 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 12:35:06.691932  574586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 12:35:06.691985  574586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 12:35:06.691997  574586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 12:35:06.692004  574586 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0224 12:35:06.692014  574586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-568444/.minikube/addons for local assets ...
	I0224 12:35:06.692085  574586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-568444/.minikube/files for local assets ...
	I0224 12:35:06.692107  574586 start.go:296] duration metric: took 113.351038ms for postStartSetup
	I0224 12:35:06.692542  574586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961822
	I0224 12:35:06.708603  574586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/config.json ...
	I0224 12:35:06.708950  574586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:35:06.709050  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:06.725574  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:06.816175  574586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 12:35:06.820824  574586 start.go:128] duration metric: took 11.088608372s to createHost
	I0224 12:35:06.820849  574586 start.go:83] releasing machines lock for "addons-961822", held for 11.088760905s
	I0224 12:35:06.820922  574586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961822
	I0224 12:35:06.837775  574586 ssh_runner.go:195] Run: cat /version.json
	I0224 12:35:06.837834  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:06.838089  574586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 12:35:06.838148  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:06.856222  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:06.867439  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:06.942834  574586 ssh_runner.go:195] Run: systemctl --version
	I0224 12:35:07.083076  574586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0224 12:35:07.224923  574586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 12:35:07.229277  574586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 12:35:07.250989  574586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0224 12:35:07.251072  574586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 12:35:07.289754  574586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
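	[editor's note] The two `find ... -exec mv` runs above disable any pre-existing loopback, bridge, and podman CNI configs by renaming them with a `.mk_disabled` suffix, so kindnet's config wins later. A minimal replay of that rename in a scratch directory (the directory and file names here are illustrative stand-ins for /etc/cni/net.d and the files the log reports disabling):

```shell
# Scratch directory instead of /etc/cni/net.d; no sudo needed.
cni=$(mktemp -d)
touch "$cni/200-loopback.conf" "$cni/87-podman-bridge.conflist" "$cni/100-crio-bridge.conf"

# Same shape as the log's find: match loopback/bridge/podman configs that are
# not already disabled, and rename each one to <name>.mk_disabled.
find "$cni" -maxdepth 1 -type f \
  \( \( -name '*loopback.conf*' -o -name '*bridge*' -o -name '*podman*' \) \
     -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cni"
```

Re-running it is a no-op because the `-not -name '*.mk_disabled'` guard excludes already-renamed files.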
	I0224 12:35:07.289824  574586 start.go:495] detecting cgroup driver to use...
	I0224 12:35:07.289871  574586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0224 12:35:07.289957  574586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 12:35:07.305589  574586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 12:35:07.316867  574586 docker.go:217] disabling cri-docker service (if available) ...
	I0224 12:35:07.316934  574586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0224 12:35:07.330554  574586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0224 12:35:07.345167  574586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0224 12:35:07.426461  574586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0224 12:35:07.525095  574586 docker.go:233] disabling docker service ...
	I0224 12:35:07.525160  574586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0224 12:35:07.545588  574586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0224 12:35:07.557973  574586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0224 12:35:07.641205  574586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0224 12:35:07.729549  574586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0224 12:35:07.741536  574586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 12:35:07.757665  574586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0224 12:35:07.757739  574586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:35:07.767969  574586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0224 12:35:07.768038  574586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:35:07.778580  574586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:35:07.789146  574586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:35:07.799336  574586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 12:35:07.809203  574586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:35:07.819775  574586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:35:07.835841  574586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0224 12:35:07.845546  574586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 12:35:07.853940  574586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 12:35:07.862126  574586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 12:35:07.948521  574586 ssh_runner.go:195] Run: sudo systemctl restart crio
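	[editor's note] The `sed` sequence above rewrites the CRI-O drop-in before the restart: pin the pause image, switch to the cgroupfs cgroup manager, reset conmon's cgroup, and open unprivileged ports for the ingress controller. A replay of those substitutions against a scratch copy (the starting file contents below are assumed, not taken from the log):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same substitutions the log shows, minus sudo and the real path:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

cat "$conf"
```

The `default_sysctls` step is two-phase on purpose: create an empty array only if one is missing, then prepend the entry inside whatever array exists, so the edit is idempotent across restarts.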
	I0224 12:35:08.068983  574586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0224 12:35:08.069119  574586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0224 12:35:08.073525  574586 start.go:563] Will wait 60s for crictl version
	I0224 12:35:08.073591  574586 ssh_runner.go:195] Run: which crictl
	I0224 12:35:08.077294  574586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 12:35:08.112605  574586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0224 12:35:08.112706  574586 ssh_runner.go:195] Run: crio --version
	I0224 12:35:08.155854  574586 ssh_runner.go:195] Run: crio --version
	I0224 12:35:08.195862  574586 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0224 12:35:08.198756  574586 cli_runner.go:164] Run: docker network inspect addons-961822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 12:35:08.215320  574586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0224 12:35:08.219094  574586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
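	[editor's note] The /etc/hosts update above uses a filter-then-append idiom so the `host.minikube.internal` entry is replaced rather than duplicated. A sketch against a scratch file (real code targets /etc/hosts via `sudo cp`; the IP is the gateway address from the log):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# Drop any stale entry, append the current one, then swap in the new file
# (the log does the swap with `sudo cp /tmp/h.$$ /etc/hosts`):
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

Because the grep filter removes the old line first, running this any number of times leaves exactly one entry.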
	I0224 12:35:08.230287  574586 kubeadm.go:883] updating cluster {Name:addons-961822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-961822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 12:35:08.230425  574586 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:35:08.230486  574586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 12:35:08.309956  574586 crio.go:514] all images are preloaded for cri-o runtime.
	I0224 12:35:08.309980  574586 crio.go:433] Images already preloaded, skipping extraction
	I0224 12:35:08.310046  574586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0224 12:35:08.352715  574586 crio.go:514] all images are preloaded for cri-o runtime.
	I0224 12:35:08.352740  574586 cache_images.go:84] Images are preloaded, skipping loading
	I0224 12:35:08.352748  574586 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 crio true true} ...
	I0224 12:35:08.352839  574586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-961822 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-961822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0224 12:35:08.352927  574586 ssh_runner.go:195] Run: crio config
	I0224 12:35:08.410606  574586 cni.go:84] Creating CNI manager for ""
	I0224 12:35:08.410631  574586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0224 12:35:08.410643  574586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0224 12:35:08.410666  574586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-961822 NodeName:addons-961822 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0224 12:35:08.410794  574586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-961822"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 12:35:08.410869  574586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0224 12:35:08.419861  574586 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 12:35:08.419934  574586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 12:35:08.428772  574586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0224 12:35:08.447450  574586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 12:35:08.466398  574586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0224 12:35:08.483943  574586 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0224 12:35:08.487554  574586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 12:35:08.498376  574586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 12:35:08.592128  574586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 12:35:08.606724  574586 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822 for IP: 192.168.49.2
	I0224 12:35:08.606744  574586 certs.go:194] generating shared ca certs ...
	I0224 12:35:08.606759  574586 certs.go:226] acquiring lock for ca certs: {Name:mk4966073c450a993eff1105cc172ef4b89d49a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:08.607491  574586 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-568444/.minikube/ca.key
	I0224 12:35:09.630177  574586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-568444/.minikube/ca.crt ...
	I0224 12:35:09.630211  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/ca.crt: {Name:mk9419d75d7578a8e37716317d372b8ac22bed8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:09.630420  574586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-568444/.minikube/ca.key ...
	I0224 12:35:09.630435  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/ca.key: {Name:mk469d87a42976de677d683b22cdbbeb5741e5dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:09.630530  574586 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-568444/.minikube/proxy-client-ca.key
	I0224 12:35:09.994976  574586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-568444/.minikube/proxy-client-ca.crt ...
	I0224 12:35:09.995017  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/proxy-client-ca.crt: {Name:mk328beb9ed9ee94c8def9a104f2281459e07ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:09.995843  574586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-568444/.minikube/proxy-client-ca.key ...
	I0224 12:35:09.995864  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/proxy-client-ca.key: {Name:mk2b1f06a6a84895bdebac6000a8f39a1459c9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:09.996508  574586 certs.go:256] generating profile certs ...
	I0224 12:35:09.996584  574586 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.key
	I0224 12:35:09.996602  574586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt with IP's: []
	I0224 12:35:10.250564  574586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt ...
	I0224 12:35:10.250597  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: {Name:mk904b25a7f93c9402bdd8f2759c40ad03cb95d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:10.251381  574586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.key ...
	I0224 12:35:10.251400  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.key: {Name:mkd8a40b70c3a56544253f167c6bd7ad54315d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:10.251493  574586 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.key.0509d70c
	I0224 12:35:10.251516  574586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.crt.0509d70c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0224 12:35:10.654863  574586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.crt.0509d70c ...
	I0224 12:35:10.654897  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.crt.0509d70c: {Name:mkcae20eea5987e1a13cae2656096d96b535bdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:10.655085  574586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.key.0509d70c ...
	I0224 12:35:10.655099  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.key.0509d70c: {Name:mk36bcda6592114232a0383b83aa1ece8447e763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:10.655841  574586 certs.go:381] copying /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.crt.0509d70c -> /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.crt
	I0224 12:35:10.655962  574586 certs.go:385] copying /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.key.0509d70c -> /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.key
	I0224 12:35:10.656020  574586 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/proxy-client.key
	I0224 12:35:10.656041  574586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/proxy-client.crt with IP's: []
	I0224 12:35:11.067312  574586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/proxy-client.crt ...
	I0224 12:35:11.067346  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/proxy-client.crt: {Name:mkf08f7b8fbda7cf1f405c870df7c20ab661f656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:11.068149  574586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/proxy-client.key ...
	I0224 12:35:11.068170  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/proxy-client.key: {Name:mk5000359ba393068280a67a66cd6a72770a7022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:11.068419  574586 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca-key.pem (1675 bytes)
	I0224 12:35:11.068475  574586 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/ca.pem (1082 bytes)
	I0224 12:35:11.068507  574586 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/cert.pem (1123 bytes)
	I0224 12:35:11.068541  574586 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-568444/.minikube/certs/key.pem (1675 bytes)
	I0224 12:35:11.069189  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 12:35:11.094904  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 12:35:11.120719  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 12:35:11.146316  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 12:35:11.171180  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0224 12:35:11.196135  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 12:35:11.220999  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 12:35:11.245573  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 12:35:11.269961  574586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-568444/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 12:35:11.294927  574586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 12:35:11.313188  574586 ssh_runner.go:195] Run: openssl version
	I0224 12:35:11.318767  574586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 12:35:11.328485  574586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 12:35:11.332255  574586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 12:35 /usr/share/ca-certificates/minikubeCA.pem
	I0224 12:35:11.332378  574586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 12:35:11.340318  574586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
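	[editor's note] The `b5213941.0` symlink above exists because OpenSSL's `-CApath` lookup finds CA certs by subject-name hash, not filename; `openssl x509 -hash` prints that hash and minikube links `<hash>.0` to the PEM. A sketch with a throwaway self-signed CA (generated here, not the minikubeCA from the log):

```shell
dir=$(mktemp -d)

# Throwaway self-signed CA standing in for minikubeCA.pem:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null

# Same naming scheme as /etc/ssl/certs/b5213941.0 in the log:
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -s "$dir/ca.pem" "$dir/$hash.0"

echo "hash link: $hash.0"
```

With that link in place, `openssl verify -CApath "$dir"` can resolve the CA, which is what lets tools on the node trust certificates minikube signed.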
	I0224 12:35:11.349793  574586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 12:35:11.353073  574586 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0224 12:35:11.353126  574586 kubeadm.go:392] StartCluster: {Name:addons-961822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-961822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:35:11.353210  574586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0224 12:35:11.353271  574586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0224 12:35:11.389843  574586 cri.go:89] found id: ""
	I0224 12:35:11.389961  574586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 12:35:11.398836  574586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 12:35:11.407521  574586 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0224 12:35:11.407613  574586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 12:35:11.418949  574586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 12:35:11.418973  574586 kubeadm.go:157] found existing configuration files:
	
	I0224 12:35:11.419053  574586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 12:35:11.428095  574586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 12:35:11.428162  574586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 12:35:11.436618  574586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 12:35:11.445432  574586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 12:35:11.445496  574586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 12:35:11.453666  574586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 12:35:11.462379  574586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 12:35:11.462463  574586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 12:35:11.470890  574586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 12:35:11.479118  574586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 12:35:11.479201  574586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
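The four checks above all follow the same grep-then-remove pattern: if a kubeconfig file does not reference the expected control-plane endpoint, it is treated as stale and deleted (here the `Process exited with status 2` lines just mean grep found neither the pattern nor the file). A minimal standalone sketch of that pattern — the file paths below are illustrative, not minikube's actual code:

```shell
ENDPOINT="https://control-plane.minikube.internal:8443"

clean_if_stale() {
    # Delete the file unless it references the expected endpoint.
    # grep exits non-zero when the pattern (or the file) is missing.
    if ! grep -q "$ENDPOINT" "$1" 2>/dev/null; then
        rm -f "$1"
    fi
}

# Demo: one current file, one stale file (hypothetical paths).
printf 'server: %s\n' "$ENDPOINT" > /tmp/demo-current.conf
printf 'server: https://old-host:8443\n' > /tmp/demo-stale.conf
clean_if_stale /tmp/demo-current.conf
clean_if_stale /tmp/demo-stale.conf
```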
	I0224 12:35:11.487038  574586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 12:35:11.526120  574586 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0224 12:35:11.526429  574586 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 12:35:11.547262  574586 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0224 12:35:11.547335  574586 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1077-aws
	I0224 12:35:11.547376  574586 kubeadm.go:310] OS: Linux
	I0224 12:35:11.547426  574586 kubeadm.go:310] CGROUPS_CPU: enabled
	I0224 12:35:11.547479  574586 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0224 12:35:11.547529  574586 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0224 12:35:11.547579  574586 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0224 12:35:11.547631  574586 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0224 12:35:11.547685  574586 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0224 12:35:11.547734  574586 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0224 12:35:11.547787  574586 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0224 12:35:11.547837  574586 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0224 12:35:11.618435  574586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 12:35:11.618554  574586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 12:35:11.618650  574586 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0224 12:35:11.627014  574586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 12:35:11.633597  574586 out.go:235]   - Generating certificates and keys ...
	I0224 12:35:11.633695  574586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 12:35:11.633768  574586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 12:35:12.059782  574586 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 12:35:12.288655  574586 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0224 12:35:12.764993  574586 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0224 12:35:12.986758  574586 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0224 12:35:13.388184  574586 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0224 12:35:13.388343  574586 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-961822 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0224 12:35:14.077520  574586 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0224 12:35:14.077894  574586 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-961822 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0224 12:35:14.431446  574586 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 12:35:14.763950  574586 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 12:35:15.093676  574586 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0224 12:35:15.093974  574586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 12:35:15.298638  574586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 12:35:15.748862  574586 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0224 12:35:16.203737  574586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 12:35:16.551502  574586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 12:35:16.741034  574586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 12:35:16.741794  574586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 12:35:16.746751  574586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 12:35:16.750305  574586 out.go:235]   - Booting up control plane ...
	I0224 12:35:16.750416  574586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 12:35:16.750492  574586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 12:35:16.751346  574586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 12:35:16.761737  574586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 12:35:16.768685  574586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 12:35:16.768750  574586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 12:35:16.860212  574586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0224 12:35:16.860342  574586 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0224 12:35:17.861381  574586 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001435032s
	I0224 12:35:17.861475  574586 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0224 12:35:23.863090  574586 kubeadm.go:310] [api-check] The API server is healthy after 6.001724108s
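Both waits above are poll-until-healthy loops against localhost endpoints (the kubelet on `:10248/healthz`, then the API server), each with a 4m0s ceiling. A generic version of that retry loop, assuming nothing about minikube's internals:

```shell
# Generic poll-until-healthy loop; returns non-zero if the deadline passes.
wait_healthy() {
    deadline=$(( $(date +%s) + $1 )); shift
    until "$@" >/dev/null 2>&1; do
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        sleep 1
    done
}

# In the real flow this would be something like:
#   wait_healthy 240 curl -fsS http://127.0.0.1:10248/healthz
# Demo with a command that succeeds immediately:
if wait_healthy 5 true; then
    echo ok > /tmp/demo-healthcheck
fi
```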
	I0224 12:35:23.883008  574586 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 12:35:23.900754  574586 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 12:35:23.937509  574586 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 12:35:23.937725  574586 kubeadm.go:310] [mark-control-plane] Marking the node addons-961822 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 12:35:23.950058  574586 kubeadm.go:310] [bootstrap-token] Using token: 3mcrbz.z49r0xzl3wzf5f4h
	I0224 12:35:23.954962  574586 out.go:235]   - Configuring RBAC rules ...
	I0224 12:35:23.955106  574586 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 12:35:23.959564  574586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 12:35:23.967900  574586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 12:35:23.971833  574586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 12:35:23.975642  574586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 12:35:23.980654  574586 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 12:35:24.269971  574586 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 12:35:24.718595  574586 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0224 12:35:25.273212  574586 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0224 12:35:25.274490  574586 kubeadm.go:310] 
	I0224 12:35:25.274564  574586 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0224 12:35:25.274590  574586 kubeadm.go:310] 
	I0224 12:35:25.274686  574586 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0224 12:35:25.274692  574586 kubeadm.go:310] 
	I0224 12:35:25.274718  574586 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0224 12:35:25.274777  574586 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 12:35:25.274828  574586 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 12:35:25.274833  574586 kubeadm.go:310] 
	I0224 12:35:25.274886  574586 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0224 12:35:25.274890  574586 kubeadm.go:310] 
	I0224 12:35:25.274937  574586 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 12:35:25.274942  574586 kubeadm.go:310] 
	I0224 12:35:25.274994  574586 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0224 12:35:25.275070  574586 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 12:35:25.275137  574586 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 12:35:25.275142  574586 kubeadm.go:310] 
	I0224 12:35:25.275226  574586 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 12:35:25.275323  574586 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0224 12:35:25.275330  574586 kubeadm.go:310] 
	I0224 12:35:25.275414  574586 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3mcrbz.z49r0xzl3wzf5f4h \
	I0224 12:35:25.275518  574586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62cf6495cf77910e7dbec17c358972b674ae4e71421d0d177be28fdc27e66e4c \
	I0224 12:35:25.275539  574586 kubeadm.go:310] 	--control-plane 
	I0224 12:35:25.275543  574586 kubeadm.go:310] 
	I0224 12:35:25.275628  574586 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0224 12:35:25.275633  574586 kubeadm.go:310] 
	I0224 12:35:25.275714  574586 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3mcrbz.z49r0xzl3wzf5f4h \
	I0224 12:35:25.275817  574586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62cf6495cf77910e7dbec17c358972b674ae4e71421d0d177be28fdc27e66e4c 
	I0224 12:35:25.278242  574586 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0224 12:35:25.278502  574586 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1077-aws\n", err: exit status 1
	I0224 12:35:25.278636  574586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
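The `--discovery-token-ca-cert-hash sha256:…` printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key, recomputable from the CA certificate with the openssl pipeline kubeadm documents. Demonstrated here against a throwaway self-signed certificate rather than the cluster's real CA at `/var/lib/minikube/certs/ca.crt`:

```shell
# Generate a throwaway CA cert (stand-in for the cluster CA; do not reuse).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# sha256:<hex> of the DER-encoded public key -- the kubeadm join hash format.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash" > /tmp/demo-ca.hash
```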
	I0224 12:35:25.278649  574586 cni.go:84] Creating CNI manager for ""
	I0224 12:35:25.278657  574586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0224 12:35:25.281755  574586 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0224 12:35:25.284650  574586 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 12:35:25.288396  574586 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0224 12:35:25.288417  574586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0224 12:35:25.306569  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 12:35:25.581754  574586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 12:35:25.581884  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:25.581965  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-961822 minikube.k8s.io/updated_at=2025_02_24T12_35_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650 minikube.k8s.io/name=addons-961822 minikube.k8s.io/primary=true
	I0224 12:35:25.589081  574586 ops.go:34] apiserver oom_adj: -16
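The `-16` above is the API server's kernel OOM adjustment, read via procfs (the logged command was `cat /proc/$(pgrep kube-apiserver)/oom_adj`); a negative value makes the OOM killer much less likely to target the process. The same read works for any PID — shown here against the current shell, since no kube-apiserver is assumed:

```shell
# oom_adj is the legacy knob; modern kernels expose oom_score_adj alongside it.
# $$ is the current shell's PID, standing in for $(pgrep kube-apiserver).
if [ -r "/proc/$$/oom_score_adj" ]; then
    cat "/proc/$$/oom_score_adj" > /tmp/demo-oom
else
    echo 0 > /tmp/demo-oom   # non-procfs fallback so the demo stays runnable
fi
```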
	I0224 12:35:25.721151  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:26.222290  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:26.721791  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:27.221553  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:27.721779  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:28.221215  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:28.721902  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:29.221765  574586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 12:35:29.352078  574586 kubeadm.go:1113] duration metric: took 3.770234393s to wait for elevateKubeSystemPrivileges
	I0224 12:35:29.352110  574586 kubeadm.go:394] duration metric: took 17.998983554s to StartCluster
	I0224 12:35:29.352129  574586 settings.go:142] acquiring lock: {Name:mk52bdb5cd3228f0beb4dda622d9fbebd6ecd272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:29.352852  574586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-568444/kubeconfig
	I0224 12:35:29.353306  574586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-568444/kubeconfig: {Name:mkeb0ff92c593cfa317c63842a536afd805c0d49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 12:35:29.353519  574586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0224 12:35:29.353657  574586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 12:35:29.353910  574586 config.go:182] Loaded profile config "addons-961822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:35:29.353943  574586 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0224 12:35:29.354029  574586 addons.go:69] Setting yakd=true in profile "addons-961822"
	I0224 12:35:29.354045  574586 addons.go:238] Setting addon yakd=true in "addons-961822"
	I0224 12:35:29.354069  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.354587  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.354996  574586 addons.go:69] Setting inspektor-gadget=true in profile "addons-961822"
	I0224 12:35:29.355020  574586 addons.go:238] Setting addon inspektor-gadget=true in "addons-961822"
	I0224 12:35:29.355042  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.355487  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.355990  574586 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-961822"
	I0224 12:35:29.356014  574586 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-961822"
	I0224 12:35:29.356036  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.356459  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.359818  574586 addons.go:69] Setting metrics-server=true in profile "addons-961822"
	I0224 12:35:29.359852  574586 addons.go:238] Setting addon metrics-server=true in "addons-961822"
	I0224 12:35:29.359890  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.360332  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.363395  574586 addons.go:69] Setting cloud-spanner=true in profile "addons-961822"
	I0224 12:35:29.363431  574586 addons.go:238] Setting addon cloud-spanner=true in "addons-961822"
	I0224 12:35:29.363466  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.363921  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.371091  574586 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-961822"
	I0224 12:35:29.371125  574586 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-961822"
	I0224 12:35:29.371165  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.371684  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.371996  574586 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-961822"
	I0224 12:35:29.372050  574586 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-961822"
	I0224 12:35:29.372078  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.372493  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.379459  574586 addons.go:69] Setting registry=true in profile "addons-961822"
	I0224 12:35:29.379492  574586 addons.go:238] Setting addon registry=true in "addons-961822"
	I0224 12:35:29.379528  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.379996  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.391324  574586 addons.go:69] Setting default-storageclass=true in profile "addons-961822"
	I0224 12:35:29.391379  574586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-961822"
	I0224 12:35:29.391723  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.395652  574586 addons.go:69] Setting storage-provisioner=true in profile "addons-961822"
	I0224 12:35:29.395693  574586 addons.go:238] Setting addon storage-provisioner=true in "addons-961822"
	I0224 12:35:29.395737  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.396208  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.412381  574586 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-961822"
	I0224 12:35:29.414059  574586 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-961822"
	I0224 12:35:29.414439  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.419695  574586 addons.go:69] Setting gcp-auth=true in profile "addons-961822"
	I0224 12:35:29.419732  574586 mustload.go:65] Loading cluster: addons-961822
	I0224 12:35:29.419926  574586 config.go:182] Loaded profile config "addons-961822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:35:29.420172  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.443204  574586 addons.go:69] Setting ingress=true in profile "addons-961822"
	I0224 12:35:29.443298  574586 addons.go:238] Setting addon ingress=true in "addons-961822"
	I0224 12:35:29.443348  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.443832  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.448678  574586 addons.go:69] Setting volcano=true in profile "addons-961822"
	I0224 12:35:29.448711  574586 addons.go:238] Setting addon volcano=true in "addons-961822"
	I0224 12:35:29.448758  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.449249  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.463571  574586 addons.go:69] Setting ingress-dns=true in profile "addons-961822"
	I0224 12:35:29.463618  574586 addons.go:238] Setting addon ingress-dns=true in "addons-961822"
	I0224 12:35:29.463665  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.464149  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.472559  574586 addons.go:69] Setting volumesnapshots=true in profile "addons-961822"
	I0224 12:35:29.472595  574586 addons.go:238] Setting addon volumesnapshots=true in "addons-961822"
	I0224 12:35:29.472635  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.473158  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.493754  574586 out.go:177] * Verifying Kubernetes components...
	I0224 12:35:29.508198  574586 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0224 12:35:29.562986  574586 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0224 12:35:29.565893  574586 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0224 12:35:29.565977  574586 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0224 12:35:29.566064  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.586186  574586 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.29
	I0224 12:35:29.587102  574586 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0224 12:35:29.591593  574586 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0224 12:35:29.616628  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0224 12:35:29.616788  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.636242  574586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
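The long pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a `hosts` block (so `host.minikube.internal` resolves to the gateway, 192.168.49.1) ahead of the `forward` plugin, and a `log` directive ahead of `errors`, then feeds the result to `kubectl replace`. The sed transformation itself can be exercised on a sample Corefile (shape only, not the cluster's actual ConfigMap; requires GNU sed for `\n` in the insert text):

```shell
# Sample Corefile fragment with the 8-space indentation the patterns expect.
cat > /tmp/demo-Corefile <<'EOF'
.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
}
EOF

# The same two edits as the logged pipeline.
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' \
    /tmp/demo-Corefile > /tmp/demo-Corefile.new
```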
	I0224 12:35:29.638833  574586 addons.go:238] Setting addon default-storageclass=true in "addons-961822"
	I0224 12:35:29.638925  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.639654  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.591982  574586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 12:35:29.656262  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.658019  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0224 12:35:29.601740  574586 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0224 12:35:29.616611  574586 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0224 12:35:29.680382  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0224 12:35:29.680481  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.680765  574586 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-961822"
	I0224 12:35:29.687393  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:29.687842  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:29.670332  574586 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0224 12:35:29.690420  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.673155  574586 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0224 12:35:29.673146  574586 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	W0224 12:35:29.703591  574586 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0224 12:35:29.728820  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0224 12:35:29.734730  574586 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0224 12:35:29.735159  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0224 12:35:29.737459  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0224 12:35:29.737604  574586 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0224 12:35:29.737616  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0224 12:35:29.737680  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.744273  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0224 12:35:29.747407  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0224 12:35:29.750203  574586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 12:35:29.750416  574586 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0224 12:35:29.750437  574586 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0224 12:35:29.750504  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.751923  574586 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0224 12:35:29.751940  574586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0224 12:35:29.752002  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.769067  574586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 12:35:29.770599  574586 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0224 12:35:29.770626  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0224 12:35:29.770691  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.771667  574586 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0224 12:35:29.775433  574586 out.go:177]   - Using image docker.io/registry:2.8.3
	I0224 12:35:29.781201  574586 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0224 12:35:29.781227  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0224 12:35:29.781293  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.792974  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:29.800395  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0224 12:35:29.807371  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0224 12:35:29.808153  574586 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 12:35:29.808175  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 12:35:29.808238  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.808753  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:29.810420  574586 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 12:35:29.810440  574586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 12:35:29.810506  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.831608  574586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 12:35:29.839210  574586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0224 12:35:29.844314  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:29.845983  574586 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0224 12:35:29.846005  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0224 12:35:29.846067  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.846256  574586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0224 12:35:29.849341  574586 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0224 12:35:29.849373  574586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0224 12:35:29.849445  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.868319  574586 out.go:177]   - Using image docker.io/busybox:stable
	I0224 12:35:29.872482  574586 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0224 12:35:29.882750  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:29.883884  574586 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0224 12:35:29.883902  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0224 12:35:29.883961  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:29.915498  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:29.959500  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:29.995391  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:29.995441  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:29.995408  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:30.032625  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:30.047461  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	W0224 12:35:30.051050  574586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 12:35:30.051164  574586 retry.go:31] will retry after 374.7022ms: ssh: handshake failed: EOF
	I0224 12:35:30.055445  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:30.079111  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	W0224 12:35:30.080463  574586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 12:35:30.080493  574586 retry.go:31] will retry after 337.294956ms: ssh: handshake failed: EOF
	I0224 12:35:30.085350  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	W0224 12:35:30.086596  574586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 12:35:30.086632  574586 retry.go:31] will retry after 195.259125ms: ssh: handshake failed: EOF
	I0224 12:35:30.341906  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0224 12:35:30.353929  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0224 12:35:30.357652  574586 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0224 12:35:30.357727  574586 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0224 12:35:30.395111  574586 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0224 12:35:30.395187  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0224 12:35:30.414892  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0224 12:35:30.431053  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 12:35:30.508620  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0224 12:35:30.519251  574586 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0224 12:35:30.519274  574586 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0224 12:35:30.533822  574586 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0224 12:35:30.533848  574586 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0224 12:35:30.537650  574586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0224 12:35:30.537671  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0224 12:35:30.548223  574586 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0224 12:35:30.548285  574586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0224 12:35:30.558333  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0224 12:35:30.610299  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0224 12:35:30.707228  574586 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0224 12:35:30.707308  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0224 12:35:30.757635  574586 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0224 12:35:30.757706  574586 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0224 12:35:30.767120  574586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0224 12:35:30.767188  574586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0224 12:35:30.774038  574586 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0224 12:35:30.774103  574586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0224 12:35:30.821004  574586 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0224 12:35:30.821083  574586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0224 12:35:30.935306  574586 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0224 12:35:30.935417  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0224 12:35:30.939969  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0224 12:35:30.952846  574586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 12:35:30.952928  574586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0224 12:35:30.964360  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 12:35:30.969634  574586 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0224 12:35:30.969712  574586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0224 12:35:30.972967  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0224 12:35:31.050992  574586 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0224 12:35:31.051061  574586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0224 12:35:31.121537  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0224 12:35:31.189563  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 12:35:31.193171  574586 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0224 12:35:31.193202  574586 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0224 12:35:31.223202  574586 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0224 12:35:31.223231  574586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0224 12:35:31.339132  574586 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 12:35:31.339158  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0224 12:35:31.404980  574586 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0224 12:35:31.405010  574586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0224 12:35:31.486699  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 12:35:31.497587  574586 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0224 12:35:31.497614  574586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0224 12:35:31.623841  574586 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0224 12:35:31.623925  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0224 12:35:31.735323  574586 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0224 12:35:31.735388  574586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0224 12:35:31.810077  574586 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0224 12:35:31.810161  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0224 12:35:31.967108  574586 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0224 12:35:31.967186  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0224 12:35:32.127149  574586 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0224 12:35:32.127215  574586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0224 12:35:32.286296  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0224 12:35:33.491056  574586 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.854776814s)
	I0224 12:35:33.491187  574586 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0224 12:35:33.491156  574586 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (3.842534465s)
	I0224 12:35:33.491311  574586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 12:35:34.691645  574586 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-961822" context rescaled to 1 replicas
	I0224 12:35:35.330520  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.98857904s)
	I0224 12:35:35.330580  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.976576396s)
	I0224 12:35:35.371304  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.956378086s)
	I0224 12:35:35.828961  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.397805115s)
	I0224 12:35:36.893162  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.38446594s)
	I0224 12:35:36.893725  574586 addons.go:479] Verifying addon ingress=true in "addons-961822"
	I0224 12:35:36.893313  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.334899461s)
	I0224 12:35:36.893370  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.953328058s)
	I0224 12:35:36.894044  574586 addons.go:479] Verifying addon registry=true in "addons-961822"
	I0224 12:35:36.893387  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.928945375s)
	I0224 12:35:36.893423  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.92039332s)
	I0224 12:35:36.893454  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.771893186s)
	I0224 12:35:36.893501  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.703919062s)
	I0224 12:35:36.893569  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.406842156s)
	I0224 12:35:36.893334  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.282972613s)
	I0224 12:35:36.895573  574586 addons.go:479] Verifying addon metrics-server=true in "addons-961822"
	W0224 12:35:36.895617  574586 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0224 12:35:36.895633  574586 retry.go:31] will retry after 174.348552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0224 12:35:36.898362  574586 out.go:177] * Verifying registry addon...
	I0224 12:35:36.898512  574586 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-961822 service yakd-dashboard -n yakd-dashboard
	
	I0224 12:35:36.898555  574586 out.go:177] * Verifying ingress addon...
	I0224 12:35:36.902182  574586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0224 12:35:36.903132  574586 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0224 12:35:36.917620  574586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0224 12:35:36.917640  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:36.918008  574586 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0224 12:35:36.918052  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0224 12:35:36.921413  574586 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0224 12:35:37.070629  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 12:35:37.179813  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.893405454s)
	I0224 12:35:37.179848  574586 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-961822"
	I0224 12:35:37.180022  574586 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.688677603s)
	I0224 12:35:37.180857  574586 node_ready.go:35] waiting up to 6m0s for node "addons-961822" to be "Ready" ...
	I0224 12:35:37.183090  574586 out.go:177] * Verifying csi-hostpath-driver addon...
	I0224 12:35:37.186727  574586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0224 12:35:37.201153  574586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0224 12:35:37.201180  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:37.408208  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:37.408699  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:37.690016  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:37.906268  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:37.906460  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:38.190277  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:38.405354  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:38.406511  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:38.690392  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:38.905490  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:38.905856  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:39.184158  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:39.190268  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:39.406467  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:39.406637  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:39.696038  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:39.840170  574586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.769491375s)
	I0224 12:35:39.905672  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:39.906292  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:40.190276  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:40.405875  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:40.406481  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:40.690425  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:40.745387  574586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0224 12:35:40.745494  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:40.762581  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:40.865311  574586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0224 12:35:40.883427  574586 addons.go:238] Setting addon gcp-auth=true in "addons-961822"
	I0224 12:35:40.883528  574586 host.go:66] Checking if "addons-961822" exists ...
	I0224 12:35:40.884006  574586 cli_runner.go:164] Run: docker container inspect addons-961822 --format={{.State.Status}}
	I0224 12:35:40.901590  574586 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0224 12:35:40.901642  574586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961822
	I0224 12:35:40.907744  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:40.913288  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:40.923479  574586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/addons-961822/id_rsa Username:docker}
	I0224 12:35:41.022504  574586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 12:35:41.025494  574586 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0224 12:35:41.028418  574586 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0224 12:35:41.028443  574586 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0224 12:35:41.047212  574586 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0224 12:35:41.047257  574586 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0224 12:35:41.065105  574586 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0224 12:35:41.065129  574586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0224 12:35:41.083204  574586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0224 12:35:41.184457  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:41.190748  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:41.406976  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:41.407123  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:41.596487  574586 addons.go:479] Verifying addon gcp-auth=true in "addons-961822"
	I0224 12:35:41.601692  574586 out.go:177] * Verifying gcp-auth addon...
	I0224 12:35:41.606339  574586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0224 12:35:41.612679  574586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0224 12:35:41.612704  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:41.713712  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:41.905734  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:41.907089  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:42.110484  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:42.191872  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:42.406249  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:42.406549  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:42.609682  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:42.690483  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:42.905660  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:42.906668  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:43.109909  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:43.190190  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:43.405185  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:43.406189  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:43.609201  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:43.684037  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:43.689931  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:43.906753  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:43.907340  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:44.109423  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:44.210690  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:44.405463  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:44.406419  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:44.609003  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:44.690437  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:44.906424  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:44.906510  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:45.110180  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:45.191739  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:45.406282  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:45.406601  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:45.609449  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:45.690204  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:45.905681  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:45.905790  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:46.109680  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:46.184282  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:46.189783  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:46.406769  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:46.407278  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:46.609773  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:46.690009  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:46.905717  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:46.906171  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:47.108974  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:47.189928  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:47.406555  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:47.406716  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:47.609667  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:47.689756  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:47.905836  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:47.906160  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:48.109969  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:48.189989  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:48.406095  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:48.406417  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:48.608927  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:48.684748  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:48.689500  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:48.905933  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:48.906152  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:49.109971  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:49.189521  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:49.406432  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:49.406546  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:49.609873  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:49.689472  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:49.906058  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:49.906200  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:50.109081  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:50.189847  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:50.407227  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:50.407551  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:50.609481  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:50.689652  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:50.906263  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:50.906471  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:51.109948  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:51.183784  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:51.189479  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:51.406051  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:51.406365  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:51.609242  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:51.689551  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:51.905504  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:51.906180  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:52.109977  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:52.189943  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:52.406231  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:52.406429  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:52.609218  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:52.689261  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:52.906365  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:52.906934  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:53.109461  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:53.184249  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:53.189902  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:53.406216  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:53.406387  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:53.609451  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:53.710512  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:53.905781  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:53.906802  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:54.109723  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:54.189807  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:54.406330  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:54.406468  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:54.610658  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:54.689234  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:54.905603  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:54.906829  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:55.109915  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:55.193815  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:55.406211  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:55.406482  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:55.609620  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:55.686956  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:55.689499  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:55.906659  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:55.906669  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:56.109739  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:56.189382  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:56.405051  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:56.406195  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:56.609146  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:56.690176  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:56.905699  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:56.905802  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:57.109643  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:57.190236  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:57.404886  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:57.406296  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:57.609330  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:57.690100  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:57.906180  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:57.906349  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:58.109147  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:58.184004  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:35:58.189709  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:58.405624  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:58.407102  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:58.612577  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:58.694863  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:58.906201  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:58.906507  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:59.109570  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:59.189802  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:59.405838  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:59.406159  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:35:59.609097  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:35:59.690472  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:35:59.905152  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:35:59.906133  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:00.120715  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:00.199450  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:36:00.202345  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:00.408513  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:00.408777  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:00.609998  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:00.690581  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:00.906294  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:00.906687  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:01.109971  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:01.189960  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:01.406232  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:01.406489  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:01.609919  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:01.690440  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:01.906601  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:01.906766  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:02.110119  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:02.189903  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:02.406239  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:02.406903  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:02.609645  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:02.684634  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:36:02.689485  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:02.906251  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:02.906291  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:03.109167  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:03.189437  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:03.405553  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:03.406773  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:03.609849  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:03.690098  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:03.905121  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:03.906158  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:04.108976  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:04.189544  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:04.405937  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:04.407289  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:04.610061  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:04.690416  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:04.905466  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:04.906311  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:05.110088  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:05.183740  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:36:05.189397  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:05.405302  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:05.406435  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:05.609482  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:05.689568  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:05.906391  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:05.906482  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:06.109600  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:06.190042  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:06.406066  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:06.406294  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:06.609380  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:06.689705  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:06.905979  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:06.906125  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:07.109929  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:07.189336  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:07.405601  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:07.406977  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:07.609995  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:07.683444  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:36:07.690290  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:07.905525  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:07.906070  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:08.110131  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:08.190149  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:08.405625  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:08.405851  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:08.609773  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:08.690253  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:08.905074  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:08.906401  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:09.110133  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:09.189504  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:09.406451  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:09.406920  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:09.609819  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:09.685735  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:36:09.700553  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:09.906706  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:09.907125  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:10.110070  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:10.190132  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:10.406463  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:10.406509  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:10.610339  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:10.689656  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:10.906473  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:10.906592  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:11.109995  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:11.190526  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:11.405261  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:11.406399  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:11.609252  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:11.689768  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:11.907901  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:11.909842  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:12.109106  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:12.184067  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:36:12.189784  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:12.406598  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:12.406791  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:12.609661  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:12.689832  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:12.906255  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:12.906623  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:13.109286  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:13.189826  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:13.406273  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:13.406399  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:13.609137  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:13.689911  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:13.906376  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:13.906515  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:14.109568  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:14.184725  574586 node_ready.go:53] node "addons-961822" has status "Ready":"False"
	I0224 12:36:14.190552  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:14.405648  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:14.406386  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:14.609927  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:14.692022  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:14.906089  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:14.906450  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:15.109470  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:15.190214  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:15.405366  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:15.406209  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:15.610200  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:15.689414  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:15.905923  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:15.906066  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:16.110042  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:16.195533  574586 node_ready.go:49] node "addons-961822" has status "Ready":"True"
	I0224 12:36:16.195562  574586 node_ready.go:38] duration metric: took 39.014679796s for node "addons-961822" to be "Ready" ...
	I0224 12:36:16.195574  574586 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 12:36:16.228383  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:16.250286  574586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fcbkz" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:16.410582  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:16.411126  574586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0224 12:36:16.411147  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:16.616232  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:16.735088  574586 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0224 12:36:16.735114  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:16.972543  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:16.973267  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:17.109408  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:17.190441  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:17.406857  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:17.407358  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:17.609721  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:17.690568  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:17.914223  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:17.914660  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:18.109988  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:18.190492  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:18.257029  574586 pod_ready.go:93] pod "coredns-668d6bf9bc-fcbkz" in "kube-system" namespace has status "Ready":"True"
	I0224 12:36:18.257095  574586 pod_ready.go:82] duration metric: took 2.006773534s for pod "coredns-668d6bf9bc-fcbkz" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.257133  574586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-961822" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.267664  574586 pod_ready.go:93] pod "etcd-addons-961822" in "kube-system" namespace has status "Ready":"True"
	I0224 12:36:18.267734  574586 pod_ready.go:82] duration metric: took 10.563191ms for pod "etcd-addons-961822" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.267764  574586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-961822" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.274581  574586 pod_ready.go:93] pod "kube-apiserver-addons-961822" in "kube-system" namespace has status "Ready":"True"
	I0224 12:36:18.274653  574586 pod_ready.go:82] duration metric: took 6.868876ms for pod "kube-apiserver-addons-961822" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.274679  574586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-961822" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.287268  574586 pod_ready.go:93] pod "kube-controller-manager-addons-961822" in "kube-system" namespace has status "Ready":"True"
	I0224 12:36:18.287338  574586 pod_ready.go:82] duration metric: took 12.628029ms for pod "kube-controller-manager-addons-961822" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.287366  574586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xf5m" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.304498  574586 pod_ready.go:93] pod "kube-proxy-8xf5m" in "kube-system" namespace has status "Ready":"True"
	I0224 12:36:18.304651  574586 pod_ready.go:82] duration metric: took 17.262959ms for pod "kube-proxy-8xf5m" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.304680  574586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-961822" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.407134  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:18.407719  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:18.610896  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:18.654382  574586 pod_ready.go:93] pod "kube-scheduler-addons-961822" in "kube-system" namespace has status "Ready":"True"
	I0224 12:36:18.654452  574586 pod_ready.go:82] duration metric: took 349.743302ms for pod "kube-scheduler-addons-961822" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.654479  574586 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-t2ggm" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:18.690481  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:18.907593  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:18.907900  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:19.110263  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:19.191232  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:19.406648  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:19.406874  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:19.610722  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:19.690981  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:19.907950  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:19.908084  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:20.110606  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:20.191032  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:20.407923  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:20.411326  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:20.616149  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:20.661573  574586 pod_ready.go:103] pod "metrics-server-7fbb699795-t2ggm" in "kube-system" namespace has status "Ready":"False"
	I0224 12:36:20.691089  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:20.912799  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:20.912921  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:21.110043  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:21.212447  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:21.405324  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:21.407590  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:21.612044  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:21.693637  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:21.906304  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:21.907500  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:22.142766  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:22.172883  574586 pod_ready.go:93] pod "metrics-server-7fbb699795-t2ggm" in "kube-system" namespace has status "Ready":"True"
	I0224 12:36:22.172911  574586 pod_ready.go:82] duration metric: took 3.518411993s for pod "metrics-server-7fbb699795-t2ggm" in "kube-system" namespace to be "Ready" ...
	I0224 12:36:22.172932  574586 pod_ready.go:39] duration metric: took 5.977344285s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 12:36:22.172954  574586 api_server.go:52] waiting for apiserver process to appear ...
	I0224 12:36:22.173033  574586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:36:22.197231  574586 api_server.go:72] duration metric: took 52.843676418s to wait for apiserver process to appear ...
	I0224 12:36:22.197268  574586 api_server.go:88] waiting for apiserver healthz status ...
	I0224 12:36:22.197289  574586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0224 12:36:22.212427  574586 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0224 12:36:22.213522  574586 api_server.go:141] control plane version: v1.32.2
	I0224 12:36:22.213548  574586 api_server.go:131] duration metric: took 16.272047ms to wait for apiserver health ...
	I0224 12:36:22.213556  574586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 12:36:22.217249  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:22.218086  574586 system_pods.go:59] 18 kube-system pods found
	I0224 12:36:22.218117  574586 system_pods.go:61] "coredns-668d6bf9bc-fcbkz" [a0dfd7c3-44bc-4cea-a099-f57580aaf593] Running
	I0224 12:36:22.218126  574586 system_pods.go:61] "csi-hostpath-attacher-0" [e62264d6-4b57-4c23-bcd5-e8524c00d97b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0224 12:36:22.218133  574586 system_pods.go:61] "csi-hostpath-resizer-0" [9298cdcf-e9f6-45b6-9ea9-da0f574ecb4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0224 12:36:22.218149  574586 system_pods.go:61] "csi-hostpathplugin-jqjtz" [6ceff2e6-f80a-473e-aa86-eedb5e283949] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0224 12:36:22.218159  574586 system_pods.go:61] "etcd-addons-961822" [827c6e1b-351f-42ad-a503-d721808ab7be] Running
	I0224 12:36:22.218164  574586 system_pods.go:61] "kindnet-48kmx" [4df9988e-847c-4fe8-822a-422f4b096923] Running
	I0224 12:36:22.218168  574586 system_pods.go:61] "kube-apiserver-addons-961822" [3adefe62-feaf-452c-b2d1-7eb20c453aca] Running
	I0224 12:36:22.218173  574586 system_pods.go:61] "kube-controller-manager-addons-961822" [a3be99f5-f6d2-4022-9712-21e84a6670d4] Running
	I0224 12:36:22.218182  574586 system_pods.go:61] "kube-ingress-dns-minikube" [45af069f-ef77-4261-a1fd-b94406ab9f71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0224 12:36:22.218186  574586 system_pods.go:61] "kube-proxy-8xf5m" [8c9fd27e-c653-48da-8aa3-792f9e0d1365] Running
	I0224 12:36:22.218191  574586 system_pods.go:61] "kube-scheduler-addons-961822" [a421c24c-7b6d-4ec5-b154-2a47773a399a] Running
	I0224 12:36:22.218197  574586 system_pods.go:61] "metrics-server-7fbb699795-t2ggm" [42d3fd5b-ac3a-415a-8cf4-8db80e497487] Running
	I0224 12:36:22.218204  574586 system_pods.go:61] "nvidia-device-plugin-daemonset-c8kxf" [0fd34bf3-198a-4041-9a8e-8ff90d9b3dc1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0224 12:36:22.218210  574586 system_pods.go:61] "registry-6c88467877-m7xss" [fda94111-471e-47d3-9d3d-56111eaa4f83] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0224 12:36:22.218218  574586 system_pods.go:61] "registry-proxy-p98ck" [6fdaebf5-b82c-41a1-9680-3a1085f207c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0224 12:36:22.218224  574586 system_pods.go:61] "snapshot-controller-68b874b76f-487k5" [fe26ec00-41e9-4dad-bb9c-eb78dd40834b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 12:36:22.218233  574586 system_pods.go:61] "snapshot-controller-68b874b76f-ckp9n" [96f75e8c-7a5a-43a1-89a1-153f131e1b29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 12:36:22.218240  574586 system_pods.go:61] "storage-provisioner" [cc19263c-9bba-4a1f-a514-2a4979b07f8e] Running
	I0224 12:36:22.218245  574586 system_pods.go:74] duration metric: took 4.683555ms to wait for pod list to return data ...
	I0224 12:36:22.218252  574586 default_sa.go:34] waiting for default service account to be created ...
	I0224 12:36:22.253936  574586 default_sa.go:45] found service account: "default"
	I0224 12:36:22.253962  574586 default_sa.go:55] duration metric: took 35.701194ms for default service account to be created ...
	I0224 12:36:22.253975  574586 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 12:36:22.406231  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:22.406667  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:22.455105  574586 system_pods.go:86] 18 kube-system pods found
	I0224 12:36:22.455181  574586 system_pods.go:89] "coredns-668d6bf9bc-fcbkz" [a0dfd7c3-44bc-4cea-a099-f57580aaf593] Running
	I0224 12:36:22.455200  574586 system_pods.go:89] "csi-hostpath-attacher-0" [e62264d6-4b57-4c23-bcd5-e8524c00d97b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0224 12:36:22.455210  574586 system_pods.go:89] "csi-hostpath-resizer-0" [9298cdcf-e9f6-45b6-9ea9-da0f574ecb4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0224 12:36:22.455220  574586 system_pods.go:89] "csi-hostpathplugin-jqjtz" [6ceff2e6-f80a-473e-aa86-eedb5e283949] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0224 12:36:22.455230  574586 system_pods.go:89] "etcd-addons-961822" [827c6e1b-351f-42ad-a503-d721808ab7be] Running
	I0224 12:36:22.455235  574586 system_pods.go:89] "kindnet-48kmx" [4df9988e-847c-4fe8-822a-422f4b096923] Running
	I0224 12:36:22.455265  574586 system_pods.go:89] "kube-apiserver-addons-961822" [3adefe62-feaf-452c-b2d1-7eb20c453aca] Running
	I0224 12:36:22.455270  574586 system_pods.go:89] "kube-controller-manager-addons-961822" [a3be99f5-f6d2-4022-9712-21e84a6670d4] Running
	I0224 12:36:22.455277  574586 system_pods.go:89] "kube-ingress-dns-minikube" [45af069f-ef77-4261-a1fd-b94406ab9f71] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0224 12:36:22.455281  574586 system_pods.go:89] "kube-proxy-8xf5m" [8c9fd27e-c653-48da-8aa3-792f9e0d1365] Running
	I0224 12:36:22.455287  574586 system_pods.go:89] "kube-scheduler-addons-961822" [a421c24c-7b6d-4ec5-b154-2a47773a399a] Running
	I0224 12:36:22.455291  574586 system_pods.go:89] "metrics-server-7fbb699795-t2ggm" [42d3fd5b-ac3a-415a-8cf4-8db80e497487] Running
	I0224 12:36:22.455303  574586 system_pods.go:89] "nvidia-device-plugin-daemonset-c8kxf" [0fd34bf3-198a-4041-9a8e-8ff90d9b3dc1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0224 12:36:22.455310  574586 system_pods.go:89] "registry-6c88467877-m7xss" [fda94111-471e-47d3-9d3d-56111eaa4f83] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0224 12:36:22.455318  574586 system_pods.go:89] "registry-proxy-p98ck" [6fdaebf5-b82c-41a1-9680-3a1085f207c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0224 12:36:22.455328  574586 system_pods.go:89] "snapshot-controller-68b874b76f-487k5" [fe26ec00-41e9-4dad-bb9c-eb78dd40834b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 12:36:22.455334  574586 system_pods.go:89] "snapshot-controller-68b874b76f-ckp9n" [96f75e8c-7a5a-43a1-89a1-153f131e1b29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 12:36:22.455343  574586 system_pods.go:89] "storage-provisioner" [cc19263c-9bba-4a1f-a514-2a4979b07f8e] Running
	I0224 12:36:22.455350  574586 system_pods.go:126] duration metric: took 201.36992ms to wait for k8s-apps to be running ...
	I0224 12:36:22.455358  574586 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 12:36:22.455422  574586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:36:22.467891  574586 system_svc.go:56] duration metric: took 12.522495ms WaitForService to wait for kubelet
	I0224 12:36:22.467921  574586 kubeadm.go:582] duration metric: took 53.114371687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 12:36:22.467940  574586 node_conditions.go:102] verifying NodePressure condition ...
	I0224 12:36:22.609715  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:22.654625  574586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0224 12:36:22.654656  574586 node_conditions.go:123] node cpu capacity is 2
	I0224 12:36:22.654669  574586 node_conditions.go:105] duration metric: took 186.723707ms to run NodePressure ...
	I0224 12:36:22.654683  574586 start.go:241] waiting for startup goroutines ...
	I0224 12:36:22.689879  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:22.906594  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:22.906820  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:23.109599  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:23.192163  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:23.409835  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:23.412217  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:23.610247  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:23.694332  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:23.914295  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:23.915097  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:24.110167  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:24.193165  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:24.409704  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:24.409840  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:24.610210  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:24.695791  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:24.907067  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:24.908736  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:25.110293  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:25.190286  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:25.405899  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:25.406058  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:25.610193  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:25.690307  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:25.905582  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:25.907019  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:26.110689  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:26.190844  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:26.406394  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:26.406535  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:26.609934  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:26.690438  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:26.907731  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:26.907919  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:27.114088  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:27.191270  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:27.408149  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:27.409521  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:27.610147  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:27.691565  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:27.908369  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:27.908814  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:28.110213  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:28.190959  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:28.406478  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:28.408435  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:28.609971  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:28.690821  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:28.907161  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:28.908189  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:29.126158  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:29.209874  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:29.407228  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:29.407665  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:29.610329  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:29.724475  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:29.906680  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:29.907161  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:30.110365  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:30.190856  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:30.405737  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:30.406435  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:30.609474  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:30.690777  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:30.907075  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:30.907324  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:31.109445  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:31.190937  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:31.407897  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:31.408105  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:31.610306  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:31.690579  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:31.908179  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:31.908966  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:32.109998  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:32.190930  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:32.410695  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:32.411220  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:32.610107  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:32.694434  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:32.908884  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:32.909475  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:33.111991  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:33.191891  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:33.416574  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:33.417215  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:33.609257  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:33.690244  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:33.937869  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:33.938044  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:34.110225  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:34.191480  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:34.407203  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:34.407388  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:34.610306  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:34.691166  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:34.908163  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:34.908731  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:35.110411  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:35.191254  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:35.406387  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:35.408537  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:35.609841  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:35.690320  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:35.908856  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:36.010133  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:36.109940  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:36.190221  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:36.406352  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:36.406445  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:36.609147  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:36.690710  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:36.906781  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:36.907424  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:37.110381  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:37.190419  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:37.407020  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:37.407200  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:37.609218  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:37.690207  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:37.905262  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:37.906763  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:38.109660  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:38.190536  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:38.406106  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:38.409198  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:38.609126  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:38.690809  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:38.911566  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:38.911709  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:39.110128  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:39.190186  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:39.418244  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:39.418949  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:39.610093  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:39.690490  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:39.907782  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:39.907984  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:40.110782  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:40.190091  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:40.406397  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:40.406509  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:40.609750  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:40.689766  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:40.906439  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:40.906807  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:41.109720  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:41.189825  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:41.406331  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:41.406792  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:41.609394  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:41.690670  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:41.905600  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:41.907495  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:42.119858  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:42.191006  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:42.408738  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:42.409408  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:42.609664  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:42.690377  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:42.907923  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:42.908292  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:43.110661  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:43.190066  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:43.407367  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:43.407603  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:43.609457  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:43.693755  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:43.908946  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:43.911554  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:44.109646  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:44.190793  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:44.406380  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:44.407986  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:44.610617  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:44.690502  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:44.912925  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:44.913368  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:45.110517  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:45.191951  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:45.408529  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:45.408912  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:45.611144  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:45.700907  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:45.907982  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:45.908130  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:46.110285  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:46.190756  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:46.406952  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:46.407901  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:46.610283  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:46.690504  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:46.907058  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:46.907278  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:47.110296  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:47.190587  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:47.416892  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:47.419844  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:47.611969  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:47.692274  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:47.910623  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:47.911388  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:48.113718  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:48.189723  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:48.408550  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:48.408976  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:48.610280  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:48.690487  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:48.905221  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:48.909027  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:49.110523  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:49.210955  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:49.407328  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 12:36:49.407495  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:49.609759  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:49.689818  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:49.906145  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:49.906211  574586 kapi.go:107] duration metric: took 1m13.00403126s to wait for kubernetes.io/minikube-addons=registry ...
	I0224 12:36:50.109767  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:50.189889  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:50.405804  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:50.610160  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:50.692119  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:50.907086  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:51.112027  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:51.192150  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:51.413452  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:51.609776  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:51.690528  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:51.906441  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:52.109704  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:52.192813  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:52.407780  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:52.609422  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:52.690459  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:52.906836  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:53.109656  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:53.189858  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:53.407710  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:53.610530  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:53.696188  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:53.908630  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:54.109468  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:54.190782  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:54.406921  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:54.612862  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:54.691486  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:54.907407  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:55.113068  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:55.190286  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:55.407661  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:55.610060  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:55.697651  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:55.907436  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:56.110544  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:56.191081  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:56.407612  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:56.609997  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:56.689919  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:56.906314  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:57.109182  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:57.192066  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:57.407351  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:57.610242  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:57.691077  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:57.906198  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:58.109889  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:58.190650  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:58.406672  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:58.610007  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:58.712549  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:58.906919  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:59.109916  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:59.190056  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:59.407501  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:36:59.610873  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:36:59.706116  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:36:59.908605  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:00.111181  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:00.192367  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:00.407529  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:00.625114  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:00.690917  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:00.911864  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:01.110959  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:01.190704  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:01.409140  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:01.609778  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:01.692088  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:01.906878  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:02.110256  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:02.190814  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:02.411147  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:02.609820  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:02.690097  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:02.906372  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:03.110196  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:03.190272  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:03.406521  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:03.610103  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:03.694429  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:03.911856  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:04.110138  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:04.191037  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:04.406311  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:04.610805  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:04.690587  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:04.906565  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:05.109780  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:05.189851  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:05.410898  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:05.610331  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:05.690222  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:05.909468  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:06.109885  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:06.189660  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:06.406622  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:06.612235  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:06.690301  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:06.907468  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:07.113294  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:07.211956  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:07.407470  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:07.609639  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:07.689499  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:07.906782  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:08.111363  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:08.191573  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:08.406817  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:08.610451  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:08.712890  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:08.910580  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:09.110064  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:09.202640  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:09.407298  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:09.610386  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:09.691221  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:09.906492  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:10.109891  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:10.190207  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:10.408749  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:10.610586  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:10.693638  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:10.907117  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:11.110249  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:11.191121  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:11.407457  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:11.610092  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:11.690137  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:11.906255  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:12.110971  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:12.193572  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:12.407339  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:12.610176  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:12.690976  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:12.915177  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:13.109225  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:13.190235  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:13.406253  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:13.616119  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:13.691435  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:13.907032  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:14.110376  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:14.190577  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:14.407169  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:14.609354  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:14.690368  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:14.906335  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:15.109505  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:15.190365  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:15.407321  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:15.610246  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:15.690992  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:15.907336  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:16.109982  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:16.190854  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:16.407499  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:16.609288  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:16.690104  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:16.907190  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:17.108998  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:17.190846  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:17.407727  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:17.609772  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:17.691006  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:17.909246  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:18.112523  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:18.191477  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:18.411433  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:18.609636  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:18.690549  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:18.906543  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:19.109595  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:19.190984  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:19.406795  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:19.609785  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:19.689962  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:19.907135  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:20.110610  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:20.190939  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:20.407873  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:20.610091  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:20.690163  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:20.908766  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:21.111903  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:21.190165  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:21.406601  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:21.609529  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:21.691854  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:21.907006  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:22.110025  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:22.190060  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:22.407400  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:22.610415  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:22.691759  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:22.907771  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:23.110384  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:23.190911  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:23.414993  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:23.609970  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:23.692748  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 12:37:23.907307  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:24.109973  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:24.190105  574586 kapi.go:107] duration metric: took 1m47.003374308s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0224 12:37:24.406081  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:24.609750  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:24.906723  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:25.109758  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:25.406696  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:25.609745  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:25.907374  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:26.109853  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:26.406452  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:26.609899  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:26.906801  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:27.110178  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:27.407092  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:27.610508  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:27.917686  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:28.109997  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:28.407341  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:28.609778  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:28.906986  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:29.110084  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:29.405860  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:29.610344  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:29.910883  574586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 12:37:30.136099  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:30.407432  574586 kapi.go:107] duration metric: took 1m53.50429534s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0224 12:37:30.609429  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:31.109900  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:31.609528  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:32.111717  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:32.610856  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:33.109411  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:33.610827  574586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 12:37:34.118929  574586 kapi.go:107] duration metric: took 1m52.512589756s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0224 12:37:34.122643  574586 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-961822 cluster.
	I0224 12:37:34.125504  574586 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0224 12:37:34.128365  574586 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0224 12:37:34.131389  574586 out.go:177] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, storage-provisioner, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0224 12:37:34.134263  574586 addons.go:514] duration metric: took 2m4.780304419s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns storage-provisioner inspektor-gadget nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0224 12:37:34.134307  574586 start.go:246] waiting for cluster config update ...
	I0224 12:37:34.134332  574586 start.go:255] writing updated cluster config ...
	I0224 12:37:34.134649  574586 ssh_runner.go:195] Run: rm -f paused
	I0224 12:37:34.533430  574586 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 12:37:34.536371  574586 out.go:177] * Done! kubectl is now configured to use "addons-961822" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 24 12:40:25 addons-961822 crio[955]: time="2025-02-24 12:40:25.090713081Z" level=info msg="Removed pod sandbox: 524dbdcd97da1de0dd42a8b80ea126198605123ce3a48695afc3429fe2b3c319" id=8dc743f8-37f6-4592-a2d4-cd0d1fdfde0f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.041424388Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-w6sm6/POD" id=05dc3dd9-d657-43a1-a9d9-881e48bc4635 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.041487280Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.083311005Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-w6sm6 Namespace:default ID:f3da3827deeab2192321b3e039767d7544ce01ba2e56c7dd9c2ecab6558a0438 UID:c17d3e27-4747-4c5a-bbb3-2d0ea71f88f5 NetNS:/var/run/netns/47ba19be-1f8c-4a2d-9752-6716ceec3030 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.083366398Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-w6sm6 to CNI network \"kindnet\" (type=ptp)"
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.094670213Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-w6sm6 Namespace:default ID:f3da3827deeab2192321b3e039767d7544ce01ba2e56c7dd9c2ecab6558a0438 UID:c17d3e27-4747-4c5a-bbb3-2d0ea71f88f5 NetNS:/var/run/netns/47ba19be-1f8c-4a2d-9752-6716ceec3030 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.094827497Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-w6sm6 for CNI network kindnet (type=ptp)"
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.097385888Z" level=info msg="Ran pod sandbox f3da3827deeab2192321b3e039767d7544ce01ba2e56c7dd9c2ecab6558a0438 with infra container: default/hello-world-app-7d9564db4-w6sm6/POD" id=05dc3dd9-d657-43a1-a9d9-881e48bc4635 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.100753024Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e0053c45-0249-4992-845a-3bb5d76495a7 name=/runtime.v1.ImageService/ImageStatus
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.100967678Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e0053c45-0249-4992-845a-3bb5d76495a7 name=/runtime.v1.ImageService/ImageStatus
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.104305923Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=82ccba8b-72e3-487a-bf35-72d94bfd7867 name=/runtime.v1.ImageService/PullImage
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.107126960Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 24 12:40:52 addons-961822 crio[955]: time="2025-02-24 12:40:52.391462858Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.256390330Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=82ccba8b-72e3-487a-bf35-72d94bfd7867 name=/runtime.v1.ImageService/PullImage
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.257287517Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=acf506eb-2aeb-492b-b301-69cd6b662979 name=/runtime.v1.ImageService/ImageStatus
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.257915913Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=acf506eb-2aeb-492b-b301-69cd6b662979 name=/runtime.v1.ImageService/ImageStatus
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.260298076Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7c9110af-3604-41ff-a47b-fadd8cea7ff3 name=/runtime.v1.ImageService/ImageStatus
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.260924806Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7c9110af-3604-41ff-a47b-fadd8cea7ff3 name=/runtime.v1.ImageService/ImageStatus
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.261775946Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-w6sm6/hello-world-app" id=ff8b90e6-db2d-41dc-8943-ec2d6fbac66e name=/runtime.v1.RuntimeService/CreateContainer
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.261865283Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.290726883Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/30d190b2b956e8fa1cb3a05d166e27f7a92c132b86e504ffdb2547d21da4c280/merged/etc/passwd: no such file or directory"
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.290775983Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/30d190b2b956e8fa1cb3a05d166e27f7a92c132b86e504ffdb2547d21da4c280/merged/etc/group: no such file or directory"
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.342819014Z" level=info msg="Created container 94426bdcf84cac0058519ff4ffabc57eda2af9f4c92941b83ae1e3fcefc14555: default/hello-world-app-7d9564db4-w6sm6/hello-world-app" id=ff8b90e6-db2d-41dc-8943-ec2d6fbac66e name=/runtime.v1.RuntimeService/CreateContainer
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.343837956Z" level=info msg="Starting container: 94426bdcf84cac0058519ff4ffabc57eda2af9f4c92941b83ae1e3fcefc14555" id=e14fb9d7-710d-4b7f-ad7d-8fc03247c322 name=/runtime.v1.RuntimeService/StartContainer
	Feb 24 12:40:53 addons-961822 crio[955]: time="2025-02-24 12:40:53.356489468Z" level=info msg="Started container" PID=8224 containerID=94426bdcf84cac0058519ff4ffabc57eda2af9f4c92941b83ae1e3fcefc14555 description=default/hello-world-app-7d9564db4-w6sm6/hello-world-app id=e14fb9d7-710d-4b7f-ad7d-8fc03247c322 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3da3827deeab2192321b3e039767d7544ce01ba2e56c7dd9c2ecab6558a0438
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	94426bdcf84ca       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   f3da3827deeab       hello-world-app-7d9564db4-w6sm6
	e6bf90d25a1b0       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago            Running             nginx                     0                   137a05e439143       nginx
	791135e3f0841       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   3e2e65fe8b3ea       busybox
	edcfea93c93e8       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             3 minutes ago            Running             controller                0                   6a453b01b96dc       ingress-nginx-controller-56d7c84fd4-fnh7v
	284b104ad054c       d54655ed3a8543a162b688a24bf969ee1a28d906b8ccb30188059247efdae234                                                             3 minutes ago            Exited              patch                     1                   4aa0da7012747       ingress-nginx-admission-patch-rlz8d
	385ce8f0e1b01       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   3 minutes ago            Exited              create                    0                   baa8fa5cf09d6       ingress-nginx-admission-create-pljqk
	69fadaa2b45c7       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             4 minutes ago            Running             minikube-ingress-dns      0                   f6f389f92e09f       kube-ingress-dns-minikube
	e4d4d7d0f9b22       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             4 minutes ago            Running             coredns                   0                   d0389b5a6ad81       coredns-668d6bf9bc-fcbkz
	438fd90059a59       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago            Running             storage-provisioner       0                   acbd42cb53265       storage-provisioner
	87ce6834e71fc       docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955                           5 minutes ago            Running             kindnet-cni               0                   a399fe24b5e65       kindnet-48kmx
	e981206088e51       e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062                                                             5 minutes ago            Running             kube-proxy                0                   b9c82a81956f0       kube-proxy-8xf5m
	ac22f6633efb1       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             5 minutes ago            Running             etcd                      0                   d8cd2707ba602       etcd-addons-961822
	29242c5874bbb       6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32                                                             5 minutes ago            Running             kube-apiserver            0                   6b9953d28be7d       kube-apiserver-addons-961822
	8535581458ee5       3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d                                                             5 minutes ago            Running             kube-controller-manager   0                   f55386858257f       kube-controller-manager-addons-961822
	498e3f2e24286       82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911                                                             5 minutes ago            Running             kube-scheduler            0                   ebdac8ee3ff21       kube-scheduler-addons-961822
	
	
	==> coredns [e4d4d7d0f9b22b545b39c771a2543f4549654e24c79d57a05e14c8f9c8ad804a] <==
	[INFO] 10.244.0.6:52361 - 10001 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001486526s
	[INFO] 10.244.0.6:52361 - 59279 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000119819s
	[INFO] 10.244.0.6:52361 - 53923 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000105329s
	[INFO] 10.244.0.6:41153 - 51322 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191318s
	[INFO] 10.244.0.6:41153 - 51544 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000269874s
	[INFO] 10.244.0.6:39531 - 47600 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011712s
	[INFO] 10.244.0.6:39531 - 47106 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012091s
	[INFO] 10.244.0.6:50849 - 32364 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109317s
	[INFO] 10.244.0.6:50849 - 32167 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000172208s
	[INFO] 10.244.0.6:41188 - 13977 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001407528s
	[INFO] 10.244.0.6:41188 - 14182 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001488045s
	[INFO] 10.244.0.6:37873 - 3658 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000136598s
	[INFO] 10.244.0.6:37873 - 3502 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018345s
	[INFO] 10.244.0.21:44540 - 46746 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000185411s
	[INFO] 10.244.0.21:52736 - 15386 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000165611s
	[INFO] 10.244.0.21:57478 - 49915 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136122s
	[INFO] 10.244.0.21:50775 - 48480 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126924s
	[INFO] 10.244.0.21:48487 - 31887 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101579s
	[INFO] 10.244.0.21:42199 - 18684 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122552s
	[INFO] 10.244.0.21:46796 - 65409 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002757849s
	[INFO] 10.244.0.21:39376 - 15316 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002529239s
	[INFO] 10.244.0.21:38552 - 36259 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001697898s
	[INFO] 10.244.0.21:52517 - 23014 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002261605s
	[INFO] 10.244.0.24:40714 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000217886s
	[INFO] 10.244.0.24:47932 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146896s
	
	
	==> describe nodes <==
	Name:               addons-961822
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-961822
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650
	                    minikube.k8s.io/name=addons-961822
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_24T12_35_25_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-961822
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Feb 2025 12:35:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-961822
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Feb 2025 12:40:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Feb 2025 12:39:29 +0000   Mon, 24 Feb 2025 12:35:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Feb 2025 12:39:29 +0000   Mon, 24 Feb 2025 12:35:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Feb 2025 12:39:29 +0000   Mon, 24 Feb 2025 12:35:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Feb 2025 12:39:29 +0000   Mon, 24 Feb 2025 12:36:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-961822
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 5316c6b6f9f94ab0a58a721ab1b2da57
	  System UUID:                e07dd53e-1713-495f-a528-7ebcc93fd7f6
	  Boot ID:                    6f34eb9c-4f91-46ff-b0f4-46d0ee23f33c
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  default                     hello-world-app-7d9564db4-w6sm6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-fnh7v    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m17s
	  kube-system                 coredns-668d6bf9bc-fcbkz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m23s
	  kube-system                 etcd-addons-961822                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m29s
	  kube-system                 kindnet-48kmx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m24s
	  kube-system                 kube-apiserver-addons-961822                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-addons-961822        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-8xf5m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-addons-961822                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m17s                  kube-proxy       
	  Normal   Starting                 5m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node addons-961822 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node addons-961822 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m36s (x8 over 5m36s)  kubelet          Node addons-961822 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m29s                  kubelet          Node addons-961822 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m29s                  kubelet          Node addons-961822 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m29s                  kubelet          Node addons-961822 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m25s                  node-controller  Node addons-961822 event: Registered Node addons-961822 in Controller
	  Normal   NodeReady                4m37s                  kubelet          Node addons-961822 status is now: NodeReady
	
	
	==> dmesg <==
	[Feb24 10:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb24 12:05] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [ac22f6633efb18546081c666e4194ad2c7633eff56bb5535a5b8c6ffd28ca2e1] <==
	{"level":"info","ts":"2025-02-24T12:35:33.469760Z","caller":"traceutil/trace.go:171","msg":"trace[1330251941] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:385; }","duration":"199.65581ms","start":"2025-02-24T12:35:33.270087Z","end":"2025-02-24T12:35:33.469742Z","steps":["trace[1330251941] 'agreement among raft nodes before linearized reading'  (duration: 199.113117ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:33.470919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.629924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:35:33.470968Z","caller":"traceutil/trace.go:171","msg":"trace[1360316337] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:385; }","duration":"158.689838ms","start":"2025-02-24T12:35:33.312269Z","end":"2025-02-24T12:35:33.470958Z","steps":["trace[1360316337] 'agreement among raft nodes before linearized reading'  (duration: 158.606482ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:33.471125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.418526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" limit:1 ","response":"range_response_count:1 size:3752"}
	{"level":"info","ts":"2025-02-24T12:35:33.471155Z","caller":"traceutil/trace.go:171","msg":"trace[1247726686] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-668d6bf9bc; range_end:; response_count:1; response_revision:385; }","duration":"105.452183ms","start":"2025-02-24T12:35:33.365696Z","end":"2025-02-24T12:35:33.471148Z","steps":["trace[1247726686] 'agreement among raft nodes before linearized reading'  (duration: 105.39108ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:33.473892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.449163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:35:33.473950Z","caller":"traceutil/trace.go:171","msg":"trace[174250318] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:385; }","duration":"161.526957ms","start":"2025-02-24T12:35:33.312411Z","end":"2025-02-24T12:35:33.473938Z","steps":["trace[174250318] 'agreement among raft nodes before linearized reading'  (duration: 161.415851ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:33.474125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.764395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:35:33.474153Z","caller":"traceutil/trace.go:171","msg":"trace[438611147] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:0; response_revision:385; }","duration":"161.795452ms","start":"2025-02-24T12:35:33.312351Z","end":"2025-02-24T12:35:33.474147Z","steps":["trace[438611147] 'agreement among raft nodes before linearized reading'  (duration: 161.746583ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:33.474273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.927103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:35:33.474299Z","caller":"traceutil/trace.go:171","msg":"trace[1394660530] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:385; }","duration":"161.955755ms","start":"2025-02-24T12:35:33.312337Z","end":"2025-02-24T12:35:33.474292Z","steps":["trace[1394660530] 'agreement among raft nodes before linearized reading'  (duration: 161.915714ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:33.474409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.110503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-02-24T12:35:33.474433Z","caller":"traceutil/trace.go:171","msg":"trace[1809740796] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:385; }","duration":"162.138253ms","start":"2025-02-24T12:35:33.312289Z","end":"2025-02-24T12:35:33.474427Z","steps":["trace[1809740796] 'agreement among raft nodes before linearized reading'  (duration: 162.08606ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:35.078097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.353068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/ingress-nginx\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:35:35.078255Z","caller":"traceutil/trace.go:171","msg":"trace[907793942] range","detail":"{range_begin:/registry/namespaces/ingress-nginx; range_end:; response_count:0; response_revision:429; }","duration":"109.562233ms","start":"2025-02-24T12:35:34.968665Z","end":"2025-02-24T12:35:35.078227Z","steps":["trace[907793942] 'agreement among raft nodes before linearized reading'  (duration: 62.386106ms)","trace[907793942] 'range keys from in-memory index tree'  (duration: 46.942561ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-24T12:35:35.096791Z","caller":"traceutil/trace.go:171","msg":"trace[1434034740] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"127.01775ms","start":"2025-02-24T12:35:34.969088Z","end":"2025-02-24T12:35:35.096106Z","steps":["trace[1434034740] 'process raft request'  (duration: 54.028889ms)","trace[1434034740] 'compare'  (duration: 70.888232ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-24T12:35:35.097020Z","caller":"traceutil/trace.go:171","msg":"trace[1290296712] linearizableReadLoop","detail":"{readStateIndex:443; appliedIndex:442; }","duration":"113.128367ms","start":"2025-02-24T12:35:34.983881Z","end":"2025-02-24T12:35:35.097010Z","steps":["trace[1290296712] 'read index received'  (duration: 39.161474ms)","trace[1290296712] 'applied index is now lower than readState.Index'  (duration: 73.96572ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-24T12:35:35.097090Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.348929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:35:35.116925Z","caller":"traceutil/trace.go:171","msg":"trace[1729923685] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:431; }","duration":"148.196283ms","start":"2025-02-24T12:35:34.968710Z","end":"2025-02-24T12:35:35.116906Z","steps":["trace[1729923685] 'agreement among raft nodes before linearized reading'  (duration: 128.328655ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:35.097281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.25718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-24T12:35:35.117280Z","caller":"traceutil/trace.go:171","msg":"trace[144548530] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:431; }","duration":"148.25193ms","start":"2025-02-24T12:35:34.969016Z","end":"2025-02-24T12:35:35.117268Z","steps":["trace[144548530] 'agreement among raft nodes before linearized reading'  (duration: 128.242394ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:35.097381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.393508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" limit:1 ","response":"range_response_count:1 size:93144"}
	{"level":"info","ts":"2025-02-24T12:35:35.117514Z","caller":"traceutil/trace.go:171","msg":"trace[165602964] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:431; }","duration":"148.519966ms","start":"2025-02-24T12:35:34.968981Z","end":"2025-02-24T12:35:35.117501Z","steps":["trace[165602964] 'agreement among raft nodes before linearized reading'  (duration: 128.31757ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-24T12:35:35.135877Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.102181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-02-24T12:35:35.136024Z","caller":"traceutil/trace.go:171","msg":"trace[987123530] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:445; }","duration":"104.257856ms","start":"2025-02-24T12:35:35.031752Z","end":"2025-02-24T12:35:35.136010Z","steps":["trace[987123530] 'agreement among raft nodes before linearized reading'  (duration: 104.050324ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:40:53 up  3:23,  0 users,  load average: 0.69, 1.47, 2.12
	Linux addons-961822 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [87ce6834e71fcfae275d9c80ea31a729983a4b6e8f8bff8410a46621e2e02993] <==
	I0224 12:38:45.843436       1 main.go:301] handling current node
	I0224 12:38:55.842437       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:38:55.842479       1 main.go:301] handling current node
	I0224 12:39:05.842311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:39:05.842353       1 main.go:301] handling current node
	I0224 12:39:15.842362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:39:15.842398       1 main.go:301] handling current node
	I0224 12:39:25.842619       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:39:25.842652       1 main.go:301] handling current node
	I0224 12:39:35.843026       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:39:35.843059       1 main.go:301] handling current node
	I0224 12:39:45.847311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:39:45.847425       1 main.go:301] handling current node
	I0224 12:39:55.843140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:39:55.843176       1 main.go:301] handling current node
	I0224 12:40:05.842323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:40:05.842355       1 main.go:301] handling current node
	I0224 12:40:15.842297       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:40:15.842439       1 main.go:301] handling current node
	I0224 12:40:25.842282       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:40:25.842313       1 main.go:301] handling current node
	I0224 12:40:35.842573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:40:35.842613       1 main.go:301] handling current node
	I0224 12:40:45.842289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0224 12:40:45.842325       1 main.go:301] handling current node
	
	
	==> kube-apiserver [29242c5874bbbf26bfa00efd8af210a923927ad4cf2345374ae64326129156db] <==
	I0224 12:37:54.572872       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.81.15"}
	I0224 12:38:23.110269       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0224 12:38:24.155658       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0224 12:38:25.204884       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0224 12:38:28.789594       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0224 12:38:29.733772       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0224 12:38:30.184113       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.135.27"}
	I0224 12:38:48.303058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:38:48.303803       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0224 12:38:48.330084       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:38:48.330137       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0224 12:38:48.341696       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:38:48.341818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0224 12:38:48.363847       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:38:48.363975       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0224 12:38:48.534877       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0224 12:38:48.535026       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0224 12:38:49.341665       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0224 12:38:49.530566       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0224 12:38:49.568988       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0224 12:39:24.823301       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0224 12:39:24.833348       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0224 12:39:24.843966       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0224 12:39:39.845049       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0224 12:40:51.961459       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.81.45"}
	
	
	==> kube-controller-manager [8535581458ee5538a31fe09881e64c262052b6e6af4705a19e84db6855ed6e2d] <==
	E0224 12:40:11.503109       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0224 12:40:12.221594       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0224 12:40:13.780632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-754dc876cd" duration="6.071µs"
	W0224 12:40:24.076062       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:40:24.077285       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0224 12:40:24.078375       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:40:24.078414       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 12:40:39.874609       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:40:39.875795       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0224 12:40:39.876825       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:40:39.876861       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 12:40:43.619387       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:40:43.620379       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0224 12:40:43.621302       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:40:43.621342       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 12:40:46.065954       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 12:40:46.067182       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0224 12:40:46.068257       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 12:40:46.068353       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0224 12:40:51.744025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="62.774574ms"
	I0224 12:40:51.765910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="21.841242ms"
	I0224 12:40:51.765985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.782µs"
	I0224 12:40:51.766051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="17.485µs"
	I0224 12:40:53.665662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.67605ms"
	I0224 12:40:53.666232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="60.233µs"
	
	
	==> kube-proxy [e981206088e51a6605a295b08e926ff3b9b2ff8bedebdcbdb0507ac8b5114333] <==
	I0224 12:35:35.381668       1 server_linux.go:66] "Using iptables proxy"
	I0224 12:35:36.100773       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0224 12:35:36.100951       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 12:35:36.583207       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0224 12:35:36.583423       1 server_linux.go:170] "Using iptables Proxier"
	I0224 12:35:36.612622       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 12:35:36.613023       1 server.go:497] "Version info" version="v1.32.2"
	I0224 12:35:36.613838       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 12:35:36.635165       1 config.go:199] "Starting service config controller"
	I0224 12:35:36.635330       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 12:35:36.635383       1 config.go:105] "Starting endpoint slice config controller"
	I0224 12:35:36.635389       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 12:35:36.635887       1 config.go:329] "Starting node config controller"
	I0224 12:35:36.635908       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 12:35:36.737698       1 shared_informer.go:320] Caches are synced for service config
	I0224 12:35:36.737879       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0224 12:35:36.740071       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [498e3f2e24286d6b87445210ade5184b4412d670b1b5a65b8ca173e6252932ed] <==
	I0224 12:35:21.851629       1 serving.go:386] Generated self-signed cert in-memory
	W0224 12:35:23.616996       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 12:35:23.617105       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 12:35:23.617141       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 12:35:23.617181       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 12:35:23.638189       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0224 12:35:23.638310       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 12:35:23.641457       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0224 12:35:23.641920       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 12:35:23.648627       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 12:35:23.641953       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0224 12:35:23.660379       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 12:35:23.660509       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0224 12:35:24.749772       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.722484    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/01de2849627ab17fc19e39eccd5bc81003a523b43e4c7e4def14617004ab174f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/01de2849627ab17fc19e39eccd5bc81003a523b43e4c7e4def14617004ab174f/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.725692    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5b110c5a94c2bfdf62b95841882f76d5f4564de7465dc2c70ddc53c5f9c79d34/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5b110c5a94c2bfdf62b95841882f76d5f4564de7465dc2c70ddc53c5f9c79d34/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.727853    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/82b1188dd8967464123462785dece32e31873e9adc205d170641da37c931816b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/82b1188dd8967464123462785dece32e31873e9adc205d170641da37c931816b/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.727971    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8f1075327ebb30714b0e000f14cc036a50c997c5f1aa4c71243602d915d3230e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8f1075327ebb30714b0e000f14cc036a50c997c5f1aa4c71243602d915d3230e/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.729140    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8f1075327ebb30714b0e000f14cc036a50c997c5f1aa4c71243602d915d3230e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8f1075327ebb30714b0e000f14cc036a50c997c5f1aa4c71243602d915d3230e/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.731376    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7d3e1da1dc7b23a3107a86521a7ffaab18248422a0cdcd6fa4a2dcf4fb02344c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7d3e1da1dc7b23a3107a86521a7ffaab18248422a0cdcd6fa4a2dcf4fb02344c/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.732590    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7d3e1da1dc7b23a3107a86521a7ffaab18248422a0cdcd6fa4a2dcf4fb02344c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7d3e1da1dc7b23a3107a86521a7ffaab18248422a0cdcd6fa4a2dcf4fb02344c/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.737075    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/874fde4d996faf95bbd33216f78b256722a6c14cb36266421afae99289a55be3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/874fde4d996faf95bbd33216f78b256722a6c14cb36266421afae99289a55be3/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.737212    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/85b3676f7736407ece4de0de6259c47cf4075505a841bba308a54e033e7ea653/diff" to get inode usage: stat /var/lib/containers/storage/overlay/85b3676f7736407ece4de0de6259c47cf4075505a841bba308a54e033e7ea653/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.744171    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/874fde4d996faf95bbd33216f78b256722a6c14cb36266421afae99289a55be3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/874fde4d996faf95bbd33216f78b256722a6c14cb36266421afae99289a55be3/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.878955    1472 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740400824878736809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605733,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.879008    1472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740400824878736809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605733,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.909273    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f1e3c759c36c570ca1e895dfc0c02387881b1de630620b1ddef1cd52935fdc28/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f1e3c759c36c570ca1e895dfc0c02387881b1de630620b1ddef1cd52935fdc28/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.965205    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8e5dbdbae573b4a095a27454da6dbc30d79c20f9cd038a57a6f0bbde49b27f55/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8e5dbdbae573b4a095a27454da6dbc30d79c20f9cd038a57a6f0bbde49b27f55/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:24 addons-961822 kubelet[1472]: E0224 12:40:24.982376    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8e5dbdbae573b4a095a27454da6dbc30d79c20f9cd038a57a6f0bbde49b27f55/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8e5dbdbae573b4a095a27454da6dbc30d79c20f9cd038a57a6f0bbde49b27f55/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:25 addons-961822 kubelet[1472]: E0224 12:40:25.032808    1472 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f1e3c759c36c570ca1e895dfc0c02387881b1de630620b1ddef1cd52935fdc28/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f1e3c759c36c570ca1e895dfc0c02387881b1de630620b1ddef1cd52935fdc28/diff: no such file or directory, extraDiskErr: <nil>
	Feb 24 12:40:25 addons-961822 kubelet[1472]: I0224 12:40:25.033691    1472 scope.go:117] "RemoveContainer" containerID="870ca02416831ebea51e5229485b236008d25c0bd02cdce03b0cb237a29fd3ff"
	Feb 24 12:40:34 addons-961822 kubelet[1472]: E0224 12:40:34.881745    1472 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740400834881538236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605733,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 12:40:34 addons-961822 kubelet[1472]: E0224 12:40:34.881788    1472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740400834881538236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605733,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 12:40:44 addons-961822 kubelet[1472]: E0224 12:40:44.889410    1472 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740400844889133591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605733,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 12:40:44 addons-961822 kubelet[1472]: E0224 12:40:44.889463    1472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1740400844889133591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605733,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 24 12:40:51 addons-961822 kubelet[1472]: I0224 12:40:51.739818    1472 memory_manager.go:355] "RemoveStaleState removing state" podUID="f404c08b-0cfa-4ce7-a52d-3fd722acdaf8" containerName="cloud-spanner-emulator"
	Feb 24 12:40:51 addons-961822 kubelet[1472]: I0224 12:40:51.739857    1472 memory_manager.go:355] "RemoveStaleState removing state" podUID="fa73a25a-2b70-40d7-aa85-62c84c1ae0c7" containerName="local-path-provisioner"
	Feb 24 12:40:51 addons-961822 kubelet[1472]: I0224 12:40:51.739865    1472 memory_manager.go:355] "RemoveStaleState removing state" podUID="3949f3ac-86fc-4c5b-aac9-c5b83a6adc6e" containerName="helper-pod"
	Feb 24 12:40:51 addons-961822 kubelet[1472]: I0224 12:40:51.843477    1472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75rvc\" (UniqueName: \"kubernetes.io/projected/c17d3e27-4747-4c5a-bbb3-2d0ea71f88f5-kube-api-access-75rvc\") pod \"hello-world-app-7d9564db4-w6sm6\" (UID: \"c17d3e27-4747-4c5a-bbb3-2d0ea71f88f5\") " pod="default/hello-world-app-7d9564db4-w6sm6"
	
	
	==> storage-provisioner [438fd90059a59726ecb524466aa0051892cc73ecc90bd5eaf9f76e9414d70f81] <==
	I0224 12:36:16.784796       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0224 12:36:16.813289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0224 12:36:16.813357       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0224 12:36:16.852102       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0224 12:36:16.852278       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-961822_3252b350-e151-499c-b126-34d12351032b!
	I0224 12:36:16.893673       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"655eee44-9ffd-4728-801b-317f9c54e743", APIVersion:"v1", ResourceVersion:"915", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-961822_3252b350-e151-499c-b126-34d12351032b became leader
	I0224 12:36:17.053247       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-961822_3252b350-e151-499c-b126-34d12351032b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-961822 -n addons-961822
helpers_test.go:261: (dbg) Run:  kubectl --context addons-961822 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-pljqk ingress-nginx-admission-patch-rlz8d
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-961822 describe pod ingress-nginx-admission-create-pljqk ingress-nginx-admission-patch-rlz8d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-961822 describe pod ingress-nginx-admission-create-pljqk ingress-nginx-admission-patch-rlz8d: exit status 1 (106.813708ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pljqk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rlz8d" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-961822 describe pod ingress-nginx-admission-create-pljqk ingress-nginx-admission-patch-rlz8d: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-961822 addons disable ingress-dns --alsologtostderr -v=1: (1.678763492s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-961822 addons disable ingress --alsologtostderr -v=1: (7.792998379s)
--- FAIL: TestAddons/parallel/Ingress (155.09s)
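The step that fails above is the `minikube ssh "curl …"` probe, which exits non-zero after 2m11s. When debugging this kind of flake by hand, a small retry wrapper can help distinguish a slow-to-converge ingress from one that never routes. This is a hedged sketch only: the profile name `addons-961822` is taken from the log, while `retry_curl` is a hypothetical helper, not part of the test suite.

```shell
# retry_curl: run a command up to N times, sleeping 1s between
# attempts; returns 0 on the first success, 1 if all attempts fail.
retry_curl() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example (requires the running cluster from this test; not executed here):
# retry_curl 10 minikube -p addons-961822 ssh \
#   "curl -sf http://127.0.0.1/ -H 'Host: nginx.example.com'"
```

If the probe never succeeds even with retries, the next place to look is usually the ingress-nginx controller logs and whether the `nginx.example.com` Ingress resource was admitted.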


Test pass (298/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.31
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.18
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.2/json-events 5.99
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.09
18 TestDownloadOnly/v1.32.2/DeleteAll 0.22
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 176.89
31 TestAddons/serial/GCPAuth/Namespaces 0.23
32 TestAddons/serial/GCPAuth/FakeCredentials 10.04
35 TestAddons/parallel/Registry 17.48
37 TestAddons/parallel/InspektorGadget 11.78
38 TestAddons/parallel/MetricsServer 5.85
40 TestAddons/parallel/CSI 44.35
41 TestAddons/parallel/Headlamp 18.08
42 TestAddons/parallel/CloudSpanner 6.56
43 TestAddons/parallel/LocalPath 53.49
44 TestAddons/parallel/NvidiaDevicePlugin 6.53
45 TestAddons/parallel/Yakd 11.74
47 TestAddons/StoppedEnableDisable 12.24
48 TestCertOptions 37.52
49 TestCertExpiration 246.01
51 TestForceSystemdFlag 41.05
52 TestForceSystemdEnv 39.56
58 TestErrorSpam/setup 31.18
59 TestErrorSpam/start 0.86
60 TestErrorSpam/status 1.07
61 TestErrorSpam/pause 1.81
62 TestErrorSpam/unpause 1.8
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 47.38
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 24.06
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.31
75 TestFunctional/serial/CacheCmd/cache/add_local 1.44
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 43.15
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.74
86 TestFunctional/serial/LogsFileCmd 1.79
87 TestFunctional/serial/InvalidService 4.42
89 TestFunctional/parallel/ConfigCmd 0.52
90 TestFunctional/parallel/DashboardCmd 13.9
91 TestFunctional/parallel/DryRun 0.51
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 13.83
98 TestFunctional/parallel/AddonsCmd 0.3
99 TestFunctional/parallel/PersistentVolumeClaim 26.01
101 TestFunctional/parallel/SSHCmd 0.67
102 TestFunctional/parallel/CpCmd 2.44
104 TestFunctional/parallel/FileSync 0.4
105 TestFunctional/parallel/CertSync 2.11
109 TestFunctional/parallel/NodeLabels 0.14
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.97
113 TestFunctional/parallel/License 0.54
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.26
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
129 TestFunctional/parallel/MountCmd/any-port 8.49
130 TestFunctional/parallel/ServiceCmd/List 0.53
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.4
135 TestFunctional/parallel/MountCmd/specific-port 2.23
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.91
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.33
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.58
144 TestFunctional/parallel/ImageCommands/Setup 0.78
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.86
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.3
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.72
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 177.15
163 TestMultiControlPlane/serial/DeployApp 8.34
164 TestMultiControlPlane/serial/PingHostFromPods 1.71
165 TestMultiControlPlane/serial/AddWorkerNode 37.4
166 TestMultiControlPlane/serial/NodeLabels 0.14
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
168 TestMultiControlPlane/serial/CopyFile 18.91
169 TestMultiControlPlane/serial/StopSecondaryNode 12.7
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
171 TestMultiControlPlane/serial/RestartSecondaryNode 23.9
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.59
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 182.18
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.82
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
176 TestMultiControlPlane/serial/StopCluster 35.82
177 TestMultiControlPlane/serial/RestartCluster 58.2
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
179 TestMultiControlPlane/serial/AddSecondaryNode 74.68
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
184 TestJSONOutput/start/Command 47.84
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.74
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.66
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.87
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 36.98
210 TestKicCustomNetwork/use_default_bridge_network 35.73
211 TestKicExistingNetwork 30.65
212 TestKicCustomSubnet 33.3
213 TestKicStaticIP 35.77
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 68.37
218 TestMountStart/serial/StartWithMountFirst 6.24
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 8.84
221 TestMountStart/serial/VerifyMountSecond 0.25
222 TestMountStart/serial/DeleteFirst 1.62
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.21
225 TestMountStart/serial/RestartStopped 7.65
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 77.96
230 TestMultiNode/serial/DeployApp2Nodes 5.77
231 TestMultiNode/serial/PingHostFrom2Pods 1.03
232 TestMultiNode/serial/AddNode 29.16
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.67
235 TestMultiNode/serial/CopyFile 10.15
236 TestMultiNode/serial/StopNode 2.24
237 TestMultiNode/serial/StartAfterStop 10.01
238 TestMultiNode/serial/RestartKeepsNodes 83.84
239 TestMultiNode/serial/DeleteNode 5.3
240 TestMultiNode/serial/StopMultiNode 23.85
241 TestMultiNode/serial/RestartMultiNode 57.73
242 TestMultiNode/serial/ValidateNameConflict 32.64
247 TestPreload 125.63
249 TestScheduledStopUnix 110.4
252 TestInsufficientStorage 13.77
253 TestRunningBinaryUpgrade 68.07
255 TestKubernetesUpgrade 390.77
256 TestMissingContainerUpgrade 161.42
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
259 TestNoKubernetes/serial/StartWithK8s 37.69
260 TestNoKubernetes/serial/StartWithStopK8s 30.42
261 TestNoKubernetes/serial/Start 9.79
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
263 TestNoKubernetes/serial/ProfileList 5.84
264 TestNoKubernetes/serial/Stop 1.26
265 TestNoKubernetes/serial/StartNoArgs 7.27
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
267 TestStoppedBinaryUpgrade/Setup 0.62
268 TestStoppedBinaryUpgrade/Upgrade 88.94
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.36
278 TestPause/serial/Start 51.78
279 TestPause/serial/SecondStartNoReconfiguration 25.41
280 TestPause/serial/Pause 0.78
281 TestPause/serial/VerifyStatus 0.31
282 TestPause/serial/Unpause 0.79
283 TestPause/serial/PauseAgain 1.11
284 TestPause/serial/DeletePaused 2.79
285 TestPause/serial/VerifyDeletedResources 0.45
293 TestNetworkPlugins/group/false 5.2
298 TestStartStop/group/old-k8s-version/serial/FirstStart 154.51
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.59
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
301 TestStartStop/group/old-k8s-version/serial/Stop 12.02
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
303 TestStartStop/group/old-k8s-version/serial/SecondStart 140.51
305 TestStartStop/group/no-preload/serial/FirstStart 71.88
306 TestStartStop/group/no-preload/serial/DeployApp 10.36
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
308 TestStartStop/group/no-preload/serial/Stop 11.98
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/no-preload/serial/SecondStart 300.96
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
314 TestStartStop/group/old-k8s-version/serial/Pause 3.04
316 TestStartStop/group/embed-certs/serial/FirstStart 51
317 TestStartStop/group/embed-certs/serial/DeployApp 11.37
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
319 TestStartStop/group/embed-certs/serial/Stop 11.98
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/embed-certs/serial/SecondStart 294.56
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
325 TestStartStop/group/no-preload/serial/Pause 3.15
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.5
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.37
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.43
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 276.14
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
336 TestStartStop/group/embed-certs/serial/Pause 3.15
338 TestStartStop/group/newest-cni/serial/FirstStart 36.96
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
341 TestStartStop/group/newest-cni/serial/Stop 1.25
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
343 TestStartStop/group/newest-cni/serial/SecondStart 15.57
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
347 TestStartStop/group/newest-cni/serial/Pause 3.08
348 TestNetworkPlugins/group/auto/Start 46.2
349 TestNetworkPlugins/group/auto/KubeletFlags 0.28
350 TestNetworkPlugins/group/auto/NetCatPod 10.33
351 TestNetworkPlugins/group/auto/DNS 0.19
352 TestNetworkPlugins/group/auto/Localhost 0.15
353 TestNetworkPlugins/group/auto/HairPin 0.16
354 TestNetworkPlugins/group/kindnet/Start 47.75
355 TestNetworkPlugins/group/kindnet/ControllerPod 6
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
357 TestNetworkPlugins/group/kindnet/NetCatPod 11.27
358 TestNetworkPlugins/group/kindnet/DNS 0.18
359 TestNetworkPlugins/group/kindnet/Localhost 0.17
360 TestNetworkPlugins/group/kindnet/HairPin 0.17
361 TestNetworkPlugins/group/calico/Start 78.93
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.2
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.07
366 TestNetworkPlugins/group/custom-flannel/Start 60.5
367 TestNetworkPlugins/group/calico/ControllerPod 6
368 TestNetworkPlugins/group/calico/KubeletFlags 0.3
369 TestNetworkPlugins/group/calico/NetCatPod 10.28
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.27
372 TestNetworkPlugins/group/calico/DNS 0.2
373 TestNetworkPlugins/group/calico/Localhost 0.19
374 TestNetworkPlugins/group/calico/HairPin 0.16
375 TestNetworkPlugins/group/custom-flannel/DNS 0.27
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
378 TestNetworkPlugins/group/enable-default-cni/Start 79.13
379 TestNetworkPlugins/group/flannel/Start 57.93
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
382 TestNetworkPlugins/group/flannel/NetCatPod 12.3
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
385 TestNetworkPlugins/group/flannel/DNS 0.18
386 TestNetworkPlugins/group/flannel/Localhost 0.15
387 TestNetworkPlugins/group/flannel/HairPin 0.15
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.52
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
391 TestNetworkPlugins/group/bridge/Start 42.75
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
393 TestNetworkPlugins/group/bridge/NetCatPod 10.28
394 TestNetworkPlugins/group/bridge/DNS 0.16
395 TestNetworkPlugins/group/bridge/Localhost 0.14
396 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (6.31s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-141372 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-141372 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.310890387s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.31s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0224 12:34:29.110654  573823 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0224 12:34:29.110734  573823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.18s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-141372
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-141372: exit status 85 (180.22687ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-141372 | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC |          |
	|         | -p download-only-141372        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 12:34:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 12:34:22.850902  573829 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:34:22.851056  573829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:34:22.851089  573829 out.go:358] Setting ErrFile to fd 2...
	I0224 12:34:22.851103  573829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:34:22.851408  573829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	W0224 12:34:22.851584  573829 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20451-568444/.minikube/config/config.json: open /home/jenkins/minikube-integration/20451-568444/.minikube/config/config.json: no such file or directory
	I0224 12:34:22.852032  573829 out.go:352] Setting JSON to true
	I0224 12:34:22.852967  573829 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11811,"bootTime":1740388652,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0224 12:34:22.853048  573829 start.go:139] virtualization:  
	I0224 12:34:22.857261  573829 out.go:97] [download-only-141372] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0224 12:34:22.857414  573829 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball: no such file or directory
	I0224 12:34:22.857501  573829 notify.go:220] Checking for updates...
	I0224 12:34:22.860388  573829 out.go:169] MINIKUBE_LOCATION=20451
	I0224 12:34:22.863397  573829 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:34:22.866388  573829 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	I0224 12:34:22.869332  573829 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	I0224 12:34:22.872179  573829 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0224 12:34:22.877836  573829 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 12:34:22.878090  573829 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:34:22.900929  573829 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 12:34:22.901059  573829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:34:22.958873  573829 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-24 12:34:22.949182812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:34:22.958992  573829 docker.go:318] overlay module found
	I0224 12:34:22.962027  573829 out.go:97] Using the docker driver based on user configuration
	I0224 12:34:22.962065  573829 start.go:297] selected driver: docker
	I0224 12:34:22.962074  573829 start.go:901] validating driver "docker" against <nil>
	I0224 12:34:22.962220  573829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:34:23.022401  573829 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-24 12:34:23.012707893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:34:23.022628  573829 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 12:34:23.022949  573829 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0224 12:34:23.023108  573829 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 12:34:23.026331  573829 out.go:169] Using Docker driver with root privileges
	I0224 12:34:23.029222  573829 cni.go:84] Creating CNI manager for ""
	I0224 12:34:23.029285  573829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0224 12:34:23.029304  573829 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0224 12:34:23.029398  573829 start.go:340] cluster config:
	{Name:download-only-141372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-141372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:34:23.032422  573829 out.go:97] Starting "download-only-141372" primary control-plane node in "download-only-141372" cluster
	I0224 12:34:23.032463  573829 cache.go:121] Beginning downloading kic base image for docker with crio
	I0224 12:34:23.035313  573829 out.go:97] Pulling base image v0.0.46-1740046583-20436 ...
	I0224 12:34:23.035344  573829 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 12:34:23.035405  573829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local docker daemon
	I0224 12:34:23.051859  573829 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 to local cache
	I0224 12:34:23.052046  573829 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local cache directory
	I0224 12:34:23.052148  573829 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 to local cache
	I0224 12:34:23.093045  573829 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0224 12:34:23.093076  573829 cache.go:56] Caching tarball of preloaded images
	I0224 12:34:23.093244  573829 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0224 12:34:23.096504  573829 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0224 12:34:23.096529  573829 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0224 12:34:23.182939  573829 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0224 12:34:27.274701  573829 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0224 12:34:27.274785  573829 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0224 12:34:27.745851  573829 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 as a tarball
	
	
	* The control-plane node download-only-141372 host does not exist
	  To start a cluster, run: "minikube start -p download-only-141372"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
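The preload steps in the log above download the tarball with an `md5:` checksum embedded in the URL query and then verify it on disk. As a minimal sketch (not minikube's actual code; the payload here is hypothetical, the real test hashes the preload tarball file), the verification step amounts to:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// verifyChecksum mirrors the "verifying checksum" step in the log:
// hash the downloaded bytes and compare against the md5 hex string
// taken from the download URL's ?checksum=md5:... query parameter.
func verifyChecksum(data []byte, want string) bool {
	sum := md5.Sum(data)
	return hex.EncodeToString(sum[:]) == want
}

func main() {
	// Hypothetical payload standing in for the downloaded tarball.
	fmt.Println(verifyChecksum([]byte("hello"), "5d41402abc4b2a76b9719d911017c592"))
}
```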
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.18s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-141372
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.32.2/json-events (5.99s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-961008 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-961008 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.989825365s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.99s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0224 12:34:35.659117  573823 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0224 12:34:35.659157  573823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-961008
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-961008: exit status 85 (86.311905ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-141372 | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC |                     |
	|         | -p download-only-141372        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC | 24 Feb 25 12:34 UTC |
	| delete  | -p download-only-141372        | download-only-141372 | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC | 24 Feb 25 12:34 UTC |
	| start   | -o=json --download-only        | download-only-961008 | jenkins | v1.35.0 | 24 Feb 25 12:34 UTC |                     |
	|         | -p download-only-961008        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 12:34:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 12:34:29.717716  574026 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:34:29.717976  574026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:34:29.718002  574026 out.go:358] Setting ErrFile to fd 2...
	I0224 12:34:29.718020  574026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:34:29.718318  574026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 12:34:29.718820  574026 out.go:352] Setting JSON to true
	I0224 12:34:29.719755  574026 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11818,"bootTime":1740388652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0224 12:34:29.719861  574026 start.go:139] virtualization:  
	I0224 12:34:29.723607  574026 out.go:97] [download-only-961008] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0224 12:34:29.723921  574026 notify.go:220] Checking for updates...
	I0224 12:34:29.727772  574026 out.go:169] MINIKUBE_LOCATION=20451
	I0224 12:34:29.730956  574026 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:34:29.733927  574026 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	I0224 12:34:29.736899  574026 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	I0224 12:34:29.739867  574026 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0224 12:34:29.745688  574026 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 12:34:29.745933  574026 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:34:29.776920  574026 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 12:34:29.777024  574026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:34:29.836312  574026 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2025-02-24 12:34:29.826575174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:34:29.836454  574026 docker.go:318] overlay module found
	I0224 12:34:29.839429  574026 out.go:97] Using the docker driver based on user configuration
	I0224 12:34:29.839462  574026 start.go:297] selected driver: docker
	I0224 12:34:29.839470  574026 start.go:901] validating driver "docker" against <nil>
	I0224 12:34:29.839574  574026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:34:29.888738  574026 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2025-02-24 12:34:29.879879069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:34:29.888950  574026 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 12:34:29.889243  574026 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0224 12:34:29.889412  574026 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 12:34:29.892778  574026 out.go:169] Using Docker driver with root privileges
	I0224 12:34:29.895567  574026 cni.go:84] Creating CNI manager for ""
	I0224 12:34:29.895646  574026 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0224 12:34:29.895668  574026 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0224 12:34:29.895759  574026 start.go:340] cluster config:
	{Name:download-only-961008 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-961008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:34:29.898707  574026 out.go:97] Starting "download-only-961008" primary control-plane node in "download-only-961008" cluster
	I0224 12:34:29.898730  574026 cache.go:121] Beginning downloading kic base image for docker with crio
	I0224 12:34:29.901621  574026 out.go:97] Pulling base image v0.0.46-1740046583-20436 ...
	I0224 12:34:29.901650  574026 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:34:29.901711  574026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local docker daemon
	I0224 12:34:29.917806  574026 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 to local cache
	I0224 12:34:29.917962  574026 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local cache directory
	I0224 12:34:29.917984  574026 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local cache directory, skipping pull
	I0224 12:34:29.917989  574026 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 exists in cache, skipping pull
	I0224 12:34:29.917996  574026 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 as a tarball
	I0224 12:34:29.956615  574026 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0224 12:34:29.956644  574026 cache.go:56] Caching tarball of preloaded images
	I0224 12:34:29.956802  574026 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0224 12:34:29.959972  574026 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0224 12:34:29.960004  574026 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	I0224 12:34:30.043654  574026 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:40a74f4030ed7e841ef78821ba704831 -> /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0224 12:34:34.153174  574026 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	I0224 12:34:34.153291  574026 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20451-568444/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-961008 host does not exist
	  To start a cluster, run: "minikube start -p download-only-961008"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.09s)

TestDownloadOnly/v1.32.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.22s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-961008
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0224 12:34:36.986969  573823 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-526164 --alsologtostderr --binary-mirror http://127.0.0.1:45873 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-526164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-526164
--- PASS: TestBinaryMirror (0.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-961822
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-961822: exit status 85 (81.311444ms)

-- stdout --
	* Profile "addons-961822" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-961822"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-961822
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-961822: exit status 85 (72.205504ms)

-- stdout --
	* Profile "addons-961822" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-961822"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (176.89s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-961822 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-961822 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m56.886552055s)
--- PASS: TestAddons/Setup (176.89s)

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-961822 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-961822 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

TestAddons/serial/GCPAuth/FakeCredentials (10.04s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-961822 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-961822 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [74c33457-f02e-4af7-ba80-6166079915eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [74c33457-f02e-4af7-ba80-6166079915eb] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003595378s
addons_test.go:633: (dbg) Run:  kubectl --context addons-961822 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-961822 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-961822 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-961822 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.04s)

TestAddons/parallel/Registry (17.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.664996ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-m7xss" [fda94111-471e-47d3-9d3d-56111eaa4f83] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003493286s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-p98ck" [6fdaebf5-b82c-41a1-9680-3a1085f207c3] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003049169s
addons_test.go:331: (dbg) Run:  kubectl --context addons-961822 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-961822 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-961822 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.476516429s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 ip
2025/02/24 12:38:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.48s)

TestAddons/parallel/InspektorGadget (11.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bzzt6" [94a1968d-df09-4221-9449-ca28bd7a1457] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004709117s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-961822 addons disable inspektor-gadget --alsologtostderr -v=1: (5.771420136s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.962524ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-t2ggm" [42d3fd5b-ac3a-415a-8cf4-8db80e497487] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004293433s
addons_test.go:402: (dbg) Run:  kubectl --context addons-961822 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

                                                
                                    
TestAddons/parallel/CSI (44.35s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0224 12:38:11.177617  573823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0224 12:38:11.181348  573823 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0224 12:38:11.181379  573823 kapi.go:107] duration metric: took 9.471181ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.483235ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-961822 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-961822 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [96cffad7-9655-45c7-8109-534de8e383fa] Pending
helpers_test.go:344: "task-pv-pod" [96cffad7-9655-45c7-8109-534de8e383fa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [96cffad7-9655-45c7-8109-534de8e383fa] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.002897047s
addons_test.go:511: (dbg) Run:  kubectl --context addons-961822 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-961822 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-961822 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-961822 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-961822 delete pod task-pv-pod: (1.350703932s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-961822 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-961822 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-961822 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4e68dea2-186f-4272-a623-142552b1de6b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4e68dea2-186f-4272-a623-142552b1de6b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003768362s
addons_test.go:553: (dbg) Run:  kubectl --context addons-961822 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-961822 delete pod task-pv-pod-restore: (1.306575392s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-961822 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-961822 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-961822 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.774569482s)
--- PASS: TestAddons/parallel/CSI (44.35s)
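
For reference, the snapshot step above (`testdata/csi-hostpath-driver/snapshot.yaml`) creates a VolumeSnapshot of the `hpvc` claim and then restores it via `pvc-restore.yaml`. The actual testdata contents are not shown in this log; a minimal sketch of such a manifest, assuming a `csi-hostpath-snapclass` VolumeSnapshotClass installed by the addon:

```yaml
# Hypothetical sketch of the snapshot object the test creates;
# the real testdata file is not reproduced in this log, and the
# volumeSnapshotClassName is an assumption.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
  namespace: default
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
```

Readiness is then polled exactly as the helper lines above show, with `kubectl get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}`.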

                                                
                                    
TestAddons/parallel/Headlamp (18.08s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-961822 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-mfszd" [4cd0dc9c-5e78-4452-b030-a04bf46223c0] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-mfszd" [4cd0dc9c-5e78-4452-b030-a04bf46223c0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-mfszd" [4cd0dc9c-5e78-4452-b030-a04bf46223c0] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003837199s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-961822 addons disable headlamp --alsologtostderr -v=1: (6.088203141s)
--- PASS: TestAddons/parallel/Headlamp (18.08s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-754dc876cd-bsd87" [f404c08b-0cfa-4ce7-a52d-3fd722acdaf8] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003488328s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
TestAddons/parallel/LocalPath (53.49s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-961822 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-961822 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-961822 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [887678b5-3c84-4851-bf53-c1270a6e3158] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [887678b5-3c84-4851-bf53-c1270a6e3158] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [887678b5-3c84-4851-bf53-c1270a6e3158] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003548979s
addons_test.go:906: (dbg) Run:  kubectl --context addons-961822 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 ssh "cat /opt/local-path-provisioner/pvc-6aadecd2-05d5-42b7-a214-c452c036cc3a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-961822 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-961822 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-961822 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.384674483s)
--- PASS: TestAddons/parallel/LocalPath (53.49s)
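
The local-path provisioner typically binds a claim only once a consuming pod is scheduled (WaitForFirstConsumer), which is consistent with the repeated phase polling above while the busybox pod was pending. The actual `testdata/storage-provisioner-rancher/pvc.yaml` is not shown in this log; a minimal sketch of a comparable claim, assuming the provisioner's default `local-path` storage class name:

```yaml
# Hypothetical PVC sketch; the real testdata file is not reproduced
# in this log, and the storageClassName and size are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
```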

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-c8kxf" [0fd34bf3-198a-4041-9a8e-8ff90d9b3dc1] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003889978s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
TestAddons/parallel/Yakd (11.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-xlhkd" [221126dd-0632-494b-8258-5e70f374fca8] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003408916s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-961822 addons disable yakd --alsologtostderr -v=1: (5.73705469s)
--- PASS: TestAddons/parallel/Yakd (11.74s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-961822
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-961822: (11.948040662s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-961822
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-961822
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-961822
--- PASS: TestAddons/StoppedEnableDisable (12.24s)

                                                
                                    
TestCertOptions (37.52s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-912092 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-912092 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.767574324s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-912092 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-912092 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-912092 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-912092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-912092
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-912092: (2.089313827s)
--- PASS: TestCertOptions (37.52s)

                                                
                                    
TestCertExpiration (246.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-094336 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-094336 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (43.496583454s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-094336 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-094336 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.83458339s)
helpers_test.go:175: Cleaning up "cert-expiration-094336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-094336
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-094336: (2.677998802s)
--- PASS: TestCertExpiration (246.01s)

                                                
                                    
TestForceSystemdFlag (41.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-203280 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-203280 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.966373724s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-203280 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-203280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-203280
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-203280: (2.690077002s)
--- PASS: TestForceSystemdFlag (41.05s)

                                                
                                    
TestForceSystemdEnv (39.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-716615 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-716615 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.501026455s)
helpers_test.go:175: Cleaning up "force-systemd-env-716615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-716615
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-716615: (3.056906091s)
--- PASS: TestForceSystemdEnv (39.56s)

                                                
                                    
TestErrorSpam/setup (31.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-490678 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-490678 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-490678 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-490678 --driver=docker  --container-runtime=crio: (31.180749531s)
--- PASS: TestErrorSpam/setup (31.18s)

                                                
                                    
TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

                                                
                                    
TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 status
--- PASS: TestErrorSpam/status (1.07s)

                                                
                                    
TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 pause
--- PASS: TestErrorSpam/pause (1.81s)

                                                
                                    
TestErrorSpam/unpause (1.80s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (1.50s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 stop: (1.279707362s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-490678 --log_dir /tmp/nospam-490678 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20451-568444/.minikube/files/etc/test/nested/copy/573823/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (47.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-307816 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0224 12:42:35.538300  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:35.545398  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:35.556874  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:35.578269  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:35.619638  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:35.701057  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:35.862446  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:36.184146  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:36.826178  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:38.107485  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:40.668810  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:42:45.790255  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-307816 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (47.379086534s)
--- PASS: TestFunctional/serial/StartWithProxy (47.38s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (24.06s)

=== RUN   TestFunctional/serial/SoftStart
I0224 12:42:53.468948  573823 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-307816 --alsologtostderr -v=8
E0224 12:42:56.032378  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:43:16.514641  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-307816 --alsologtostderr -v=8: (24.052954204s)
functional_test.go:680: soft start took 24.058175908s for "functional-307816" cluster.
I0224 12:43:17.522252  573823 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (24.06s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-307816 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 cache add registry.k8s.io/pause:3.1: (1.488624592s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 cache add registry.k8s.io/pause:3.3: (1.477320601s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 cache add registry.k8s.io/pause:latest: (1.346244581s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)

TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-307816 /tmp/TestFunctionalserialCacheCmdcacheadd_local1322218178/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cache add minikube-local-cache-test:functional-307816
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cache delete minikube-local-cache-test:functional-307816
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-307816
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-307816 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (310.259233ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 cache reload: (1.245978311s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 kubectl -- --context functional-307816 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-307816 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (43.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-307816 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0224 12:43:57.477389  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-307816 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.151979091s)
functional_test.go:778: restart took 43.152075665s for "functional-307816" cluster.
I0224 12:44:09.632792  573823 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (43.15s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-307816 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 logs: (1.738868625s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

TestFunctional/serial/LogsFileCmd (1.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 logs --file /tmp/TestFunctionalserialLogsFileCmd3325940611/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 logs --file /tmp/TestFunctionalserialLogsFileCmd3325940611/001/logs.txt: (1.793049366s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

TestFunctional/serial/InvalidService (4.42s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-307816 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-307816
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-307816: exit status 115 (710.83945ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30853 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-307816 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-307816 config get cpus: exit status 14 (101.524471ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-307816 config get cpus: exit status 14 (71.708976ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (13.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-307816 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-307816 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 600324: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.90s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-307816 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-307816 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.67518ms)

-- stdout --
	* [functional-307816] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0224 12:44:54.290181  599763 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:44:54.291316  599763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:44:54.291518  599763 out.go:358] Setting ErrFile to fd 2...
	I0224 12:44:54.291553  599763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:44:54.291879  599763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 12:44:54.292325  599763 out.go:352] Setting JSON to false
	I0224 12:44:54.293280  599763 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12442,"bootTime":1740388652,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0224 12:44:54.293356  599763 start.go:139] virtualization:  
	I0224 12:44:54.296720  599763 out.go:177] * [functional-307816] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0224 12:44:54.300409  599763 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 12:44:54.300563  599763 notify.go:220] Checking for updates...
	I0224 12:44:54.303956  599763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:44:54.306893  599763 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	I0224 12:44:54.309876  599763 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	I0224 12:44:54.312815  599763 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0224 12:44:54.315574  599763 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 12:44:54.318820  599763 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:44:54.319459  599763 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:44:54.352929  599763 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 12:44:54.353040  599763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:44:54.417043  599763 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-24 12:44:54.408321823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:44:54.417163  599763 docker.go:318] overlay module found
	I0224 12:44:54.420227  599763 out.go:177] * Using the docker driver based on existing profile
	I0224 12:44:54.423052  599763 start.go:297] selected driver: docker
	I0224 12:44:54.423068  599763 start.go:901] validating driver "docker" against &{Name:functional-307816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-307816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:44:54.423178  599763 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 12:44:54.426721  599763 out.go:201] 
	W0224 12:44:54.429607  599763 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0224 12:44:54.432724  599763 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-307816 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-307816 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-307816 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (227.686107ms)

-- stdout --
	* [functional-307816] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0224 12:44:54.077870  599716 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:44:54.078054  599716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:44:54.078060  599716 out.go:358] Setting ErrFile to fd 2...
	I0224 12:44:54.078066  599716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:44:54.079078  599716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 12:44:54.079540  599716 out.go:352] Setting JSON to false
	I0224 12:44:54.080432  599716 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12442,"bootTime":1740388652,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0224 12:44:54.080504  599716 start.go:139] virtualization:  
	I0224 12:44:54.084313  599716 out.go:177] * [functional-307816] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0224 12:44:54.088083  599716 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 12:44:54.088292  599716 notify.go:220] Checking for updates...
	I0224 12:44:54.094263  599716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:44:54.097200  599716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	I0224 12:44:54.100078  599716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	I0224 12:44:54.102957  599716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0224 12:44:54.105898  599716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 12:44:54.109356  599716 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:44:54.109975  599716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:44:54.138740  599716 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 12:44:54.138887  599716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:44:54.218849  599716 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-24 12:44:54.209171499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:44:54.218986  599716 docker.go:318] overlay module found
	I0224 12:44:54.222205  599716 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0224 12:44:54.225137  599716 start.go:297] selected driver: docker
	I0224 12:44:54.225162  599716 start.go:901] validating driver "docker" against &{Name:functional-307816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-307816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:44:54.225273  599716 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 12:44:54.228820  599716 out.go:201] 
	W0224 12:44:54.231701  599716 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0224 12:44:54.234524  599716 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
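The InternationalLanguage test passes because minikube, started with a deliberately tiny memory request, fails fast with RSRC_INSUFFICIENT_REQ_MEMORY. A minimal sketch of that size check, using the values from the log above; the unit handling (MiB for the request, MB for the minimum) follows the message wording and is an assumption, not minikube's actual implementation.

```python
# Sketch of the memory check behind RSRC_INSUFFICIENT_REQ_MEMORY.
# Values come from the log above; the MiB-vs-MB interpretation is assumed.

MIB = 1024 * 1024  # mebibyte, as in the requested "250 MiB"
MB = 1000 * 1000   # megabyte, as in the "1800 MB" minimum

def memory_ok(requested_mib: int, minimum_mb: int = 1800) -> bool:
    """Return True if the requested allocation meets the usable minimum."""
    return requested_mib * MIB >= minimum_mb * MB

print(memory_ok(250))   # the failing request from the log: False
print(memory_ok(4000))  # the profile's configured Memory:4000: True
```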

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
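The second status invocation above uses a custom Go template (`-f host:{{.Host}},kublet:{{.Kubelet}},...` — the "kublet" spelling is in the test itself) that renders one comma-separated line. A sketch of consuming that line; the sample value is an assumed healthy-cluster output, not copied from this log.

```python
# Parse the 'key:value,key:value' line produced by the custom status format.

def parse_status(line: str) -> dict:
    """Split 'key:value,key:value' status output into a dict."""
    return dict(field.split(":", 1) for field in line.split(","))

# Assumed output for a healthy cluster (not taken from this report).
sample = "host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured"
status = parse_status(sample)
print(status["apiserver"])  # Running
```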

TestFunctional/parallel/ServiceCmdConnect (13.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-307816 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-307816 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-hxmlb" [0f6ddc38-5e71-45e2-a4b3-da801f78a612] Pending
helpers_test.go:344: "hello-node-connect-8449669db6-hxmlb" [0f6ddc38-5e71-45e2-a4b3-da801f78a612] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-hxmlb" [0f6ddc38-5e71-45e2-a4b3-da801f78a612] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.011091037s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32229
functional_test.go:1692: http://192.168.49.2:32229: success! body:

Hostname: hello-node-connect-8449669db6-hxmlb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32229
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.83s)
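The echoserver body above is a fixed plain-text report of tab-indented `key=value` fields. A sketch of pulling those fields out of such a response; `parse_echo_body` is a hypothetical helper, and the sample is a trimmed copy of the "Request Information" section from the log.

```python
# Extract the tab-indented key=value lines from an echoserver response body.

def parse_echo_body(body: str) -> dict:
    """Collect every 'key=value' line, ignoring section headings."""
    fields = {}
    for line in body.splitlines():
        line = line.strip()
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value
    return fields

# Trimmed copy of the "Request Information" section from the log above.
sample = (
    "Request Information:\n"
    "\tclient_address=10.244.0.1\n"
    "\tmethod=GET\n"
    "\treal path=/\n"
    "\tquery=\n"
    "\trequest_version=1.1\n"
    "\trequest_uri=http://192.168.49.2:8080/\n"
)
info = parse_echo_body(sample)
print(info["method"])          # GET
print(info["client_address"])  # 10.244.0.1
```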

TestFunctional/parallel/AddonsCmd (0.3s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.30s)

TestFunctional/parallel/PersistentVolumeClaim (26.01s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a6ff5972-a96d-4fb4-889d-9ea3e94bf937] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003458446s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-307816 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-307816 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-307816 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-307816 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cb888101-ed6c-46a4-b89d-b2f8f0f18023] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cb888101-ed6c-46a4-b89d-b2f8f0f18023] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.002796375s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-307816 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-307816 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-307816 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [20d9b3bd-e5ab-4122-8939-86285ae12467] Pending
helpers_test.go:344: "sp-pod" [20d9b3bd-e5ab-4122-8939-86285ae12467] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003447465s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-307816 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.01s)

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh -n functional-307816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cp functional-307816:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2519318053/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh -n functional-307816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh -n functional-307816 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/573823/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo cat /etc/test/nested/copy/573823/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/573823.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo cat /etc/ssl/certs/573823.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/573823.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo cat /usr/share/ca-certificates/573823.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/5738232.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo cat /etc/ssl/certs/5738232.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/5738232.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo cat /usr/share/ca-certificates/5738232.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-307816 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)
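The go-template above (`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`) iterates the first node's labels map and emits only the keys. The same traversal over a sample labels dict; the specific label values are typical kubernetes.io defaults assumed for illustration, not read from this cluster.

```python
# Emit only the keys of a node's labels map, like the go-template above.
# The sample labels are assumed defaults, not taken from this report.

labels = {
    "kubernetes.io/arch": "arm64",
    "kubernetes.io/hostname": "functional-307816",
    "kubernetes.io/os": "linux",
}

# Equivalent of: {{range $k, $v := .metadata.labels}}{{$k}} {{end}}
# (Go ranges maps in sorted key order, so sorted() matches.)
rendered = " ".join(sorted(labels))
print(rendered)
```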

TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-307816 ssh "sudo systemctl is-active docker": exit status 1 (554.222011ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-307816 ssh "sudo systemctl is-active containerd": exit status 1 (419.81287ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)
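Both probes above exit non-zero by design: `systemctl is-active` signals state through its exit status as well as stdout (the log pairs "inactive" with exit status 3, while 0 means active), so the test treats a non-zero exit plus "inactive" as confirmation the runtime is disabled. A sketch of interpreting that pair; `runtime_disabled` is a hypothetical helper, not part of the test suite.

```python
# Interpret the stdout/exit-status pair returned by `systemctl is-active`.

def runtime_disabled(stdout: str, exit_status: int) -> bool:
    """A runtime counts as disabled when is-active prints 'inactive' and exits non-zero."""
    return stdout.strip() == "inactive" and exit_status != 0

print(runtime_disabled("inactive\n", 3))  # docker/containerd, per the log: True
print(runtime_disabled("active\n", 0))    # what the active runtime (crio) would report: False
```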

TestFunctional/parallel/License (0.54s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.54s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-307816 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-307816 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-307816 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-307816 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 597625: os: process already finished
helpers_test.go:508: unable to kill pid 597428: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-307816 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-307816 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [04f621aa-d172-470a-8813-864f7c212d1a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [04f621aa-d172-470a-8813-864f7c212d1a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003472921s
I0224 12:44:28.057480  573823 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-307816 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.133.142 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-307816 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-307816 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-307816 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-t5xgc" [c6016eaa-8f3e-427b-aa57-c47e4ada7d57] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-t5xgc" [c6016eaa-8f3e-427b-aa57-c47e4ada7d57] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003535945s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "365.433039ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "62.355758ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "353.958449ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "59.012839ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.49s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdany-port2260830600/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1740401087409425804" to /tmp/TestFunctionalparallelMountCmdany-port2260830600/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1740401087409425804" to /tmp/TestFunctionalparallelMountCmdany-port2260830600/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1740401087409425804" to /tmp/TestFunctionalparallelMountCmdany-port2260830600/001/test-1740401087409425804
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 24 12:44 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 24 12:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 24 12:44 test-1740401087409425804
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh cat /mount-9p/test-1740401087409425804
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-307816 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c30b78a0-1583-4940-864d-09c03c7d2be0] Pending
helpers_test.go:344: "busybox-mount" [c30b78a0-1583-4940-864d-09c03c7d2be0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c30b78a0-1583-4940-864d-09c03c7d2be0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c30b78a0-1583-4940-864d-09c03c7d2be0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003105891s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-307816 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdany-port2260830600/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.49s)
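The host-side test files above are named after a Unix timestamp in nanoseconds (`test-1740401087409425804`), which is why the guest-side `ls -la` shows them dated Feb 24 12:44. Decoding the name from the log recovers the same date:

```python
# Decode the nanosecond Unix timestamp embedded in the mount test filename.
from datetime import datetime, timezone

name = "test-1740401087409425804"
nanos = int(name.removeprefix("test-"))
stamp = datetime.fromtimestamp(nanos / 1e9, tz=timezone.utc)
print(stamp.strftime("%b %d %H:%M"))  # Feb 24 12:44
```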

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 service list -o json
functional_test.go:1511: Took "526.51485ms" to run "out/minikube-linux-arm64 -p functional-307816 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:32426
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:32426
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
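
Note that the HTTPS and URL subtests resolve the same NodePort (32426) on the same node IP (192.168.49.2); only the scheme differs. A minimal sketch of that endpoint construction (the helper name is illustrative, not a minikube API):

```python
def service_url(node_ip: str, node_port: int, https: bool = False) -> str:
    """Build a NodePort service endpoint the way minikube prints it."""
    scheme = "https" if https else "http"
    return f"{scheme}://{node_ip}:{node_port}"

print(service_url("192.168.49.2", 32426))              # endpoint found by ServiceCmd/URL
print(service_url("192.168.49.2", 32426, https=True))  # endpoint found by ServiceCmd/HTTPS
```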

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdspecific-port2698803408/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-307816 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (564.242983ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0224 12:44:56.464257  573823 retry.go:31] will retry after 523.877576ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdspecific-port2698803408/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-307816 ssh "sudo umount -f /mount-9p": exit status 1 (348.798815ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-307816 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdspecific-port2698803408/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)
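
The first `findmnt` probe above failed while the 9p mount was still coming up, and the harness retried after ~524ms (`retry.go:31`). A generic sketch of that retry-with-growing-delay pattern (deterministic doubling here; minikube's actual retry helper adds jitter, which is why the logged delay is not a round number):

```python
import time

def retry(fn, attempts=5, initial_delay=0.5, backoff=2.0, sleep=time.sleep):
    """Call fn until it succeeds, multiplying the delay between attempts."""
    delay = initial_delay
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(delay)
            delay *= backoff

# Example: a probe that succeeds on the third call, recording delays instead of sleeping.
delays = []
calls = {"n": 0}

def flaky_probe():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("exit status 1")
    return "ok"

result = retry(flaky_probe, sleep=delays.append)
```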

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1161864482/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1161864482/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1161864482/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 ssh "findmnt -T" /mount1: (1.041719924s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-307816 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1161864482/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1161864482/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-307816 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1161864482/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 version -o=json --components: (1.330375104s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-307816 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-307816
localhost/kicbase/echo-server:functional-307816
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-307816 image ls --format short --alsologtostderr:
I0224 12:45:10.956703  602346 out.go:345] Setting OutFile to fd 1 ...
I0224 12:45:10.956909  602346 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:10.956955  602346 out.go:358] Setting ErrFile to fd 2...
I0224 12:45:10.956975  602346 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:10.957353  602346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
I0224 12:45:10.958961  602346 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:10.959177  602346 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:10.961377  602346 cli_runner.go:164] Run: docker container inspect functional-307816 --format={{.State.Status}}
I0224 12:45:10.990085  602346 ssh_runner.go:195] Run: systemctl --version
I0224 12:45:10.990149  602346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307816
I0224 12:45:11.015132  602346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/functional-307816/id_rsa Username:docker}
I0224 12:45:11.103842  602346 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
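
The short listing above is not in pull order: it reads as the union of all repoTags sorted in reverse lexicographic order (note `pause:latest` before `3.3`, `3.10`, `3.1`). A sketch reproducing that ordering from a few of this run's tags — an observation about this output, not a documented crictl/minikube contract:

```python
# A subset of the repoTags from this run, in arbitrary order.
tags = [
    "registry.k8s.io/pause:3.1",
    "registry.k8s.io/pause:3.10",
    "registry.k8s.io/pause:3.3",
    "registry.k8s.io/pause:latest",
    "registry.k8s.io/kube-proxy:v1.32.2",
    "registry.k8s.io/kube-scheduler:v1.32.2",
]

# A plain reverse string sort yields the same order the test printed.
listing = sorted(tags, reverse=True)
```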

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-307816 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | e1181ee320546 | 99MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/kube-proxy              | v1.32.2            | e5aac5df76d9b | 98.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | 82dfa03f692fb | 69MB   |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | ee75e27fff91c | 99MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 6417e1437b6d9 | 95MB   |
| registry.k8s.io/kube-controller-manager | v1.32.2            | 3c9285acfd2ff | 88.2MB |
| docker.io/library/nginx                 | latest             | 9b1b7be1ffa60 | 201MB  |
| localhost/minikube-local-cache-test     | functional-307816  | 09a9b06ef4a4d | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | cedb667e1a7b4 | 50.8MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/kicbase/echo-server           | functional-307816  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-307816 image ls --format table --alsologtostderr:
I0224 12:45:11.790570  602530 out.go:345] Setting OutFile to fd 1 ...
I0224 12:45:11.790719  602530 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:11.790724  602530 out.go:358] Setting ErrFile to fd 2...
I0224 12:45:11.790729  602530 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:11.791074  602530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
I0224 12:45:11.792157  602530 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:11.792294  602530 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:11.792987  602530 cli_runner.go:164] Run: docker container inspect functional-307816 --format={{.State.Status}}
I0224 12:45:11.819303  602530 ssh_runner.go:195] Run: systemctl --version
I0224 12:45:11.819433  602530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307816
I0224 12:45:11.845207  602530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/functional-307816/id_rsa Username:docker}
I0224 12:45:11.939803  602530 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)
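
The Size column in the table above is the raw byte count reported by `crictl images`, rounded to three significant figures with SI units (e.g. 519877 bytes becomes 520kB, 98313623 becomes 98.3MB). A sketch of that rounding, checked against the table's values (the function is illustrative, not minikube's actual formatter):

```python
def humanize(size_bytes: int) -> str:
    """Render a byte count as the table does: 3 significant figures, SI units."""
    for unit, divisor in (("GB", 1e9), ("MB", 1e6), ("kB", 1e3)):
        if size_bytes >= divisor:
            return f"{size_bytes / divisor:.3g}{unit}"
    return f"{size_bytes}B"
```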

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-307816 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32","repoDigests":["registry.k8s.io/kube-apiserver@sha256:22cdd0e13fe99dc2e5a3476b92895d89d81285cbe73b592033fa05b68c6c19a3","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"94991840"},{"id":"3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90","registry.k8s.io/kube-controller-manager@sha256:737052e0a84309cec4e9e3f1baaf80160273511c809893db40ab595e494a8777"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"88241478"},{
"id":"e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062","repoDigests":["registry.k8s.io/kube-proxy@sha256:6b93583f4856ea0923c6fffd91c802a2362511378390acc6e539a419210ee23b","registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"98313623"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6","r
epoDigests":["docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be","docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"99018802"},{"id":"9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58","repoDigests":["docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34","docker.io/library/nginx@sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd"],"repoTags":["docker.io/library/nginx:latest"],"size":"201397159"},{"id":"09a9b06ef4a4de371752352a476dd81828d66e22f79e88f95dd95d40db7965c5","repoDigests":["localhost/minikube-local-cache-test@sha256:2086a2a904963b0ad602fd262665b8d91e5eba4939b6c0babbe7a21c3e31b922"],"repoTags":["localhost/minikube-local-cache-test:functional-307816"],"size":"3330"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.i
o/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-307816"],"size":"4788229"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","r
epoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f","repoDigests":["docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955","docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"99018290"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f0
7a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:56568860b56c0bc8099fe1b2d84f43a18939e217e6c619126214c0f71bc27626"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50780648"},{"id":"82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:a532964581fdb02b9d692589bb93db7d4b8a7bd8c120d8fb70803da0e3c83647"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"68973894"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/
kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-307816 image ls --format json --alsologtostderr:
I0224 12:45:11.515186  602470 out.go:345] Setting OutFile to fd 1 ...
I0224 12:45:11.515371  602470 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:11.515398  602470 out.go:358] Setting ErrFile to fd 2...
I0224 12:45:11.515405  602470 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:11.515702  602470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
I0224 12:45:11.516482  602470 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:11.516649  602470 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:11.518709  602470 cli_runner.go:164] Run: docker container inspect functional-307816 --format={{.State.Status}}
I0224 12:45:11.536806  602470 ssh_runner.go:195] Run: systemctl --version
I0224 12:45:11.536871  602470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307816
I0224 12:45:11.558792  602470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/functional-307816/id_rsa Username:docker}
I0224 12:45:11.651774  602470 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
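
The JSON format is the only listing above that also surfaces untagged (dangling) images, e.g. the kubernetesui/dashboard and metrics-scraper records with `"repoTags":[]`. A sketch that filters those out of crictl-style JSON (the records below are abbreviated copies of this run's output):

```python
import json

crictl_json = """[
  {"id": "a422e0e98235...", "repoTags": [], "size": "42263767"},
  {"id": "20b332c9a70d...", "repoTags": [], "size": "247562353"},
  {"id": "afb61768ce38...", "repoTags": ["registry.k8s.io/pause:3.10"], "size": "519877"}
]"""

images = json.loads(crictl_json)
# Images with no repoTags are the ones the short/table listings print without a tag.
dangling = [img["id"] for img in images if not img["repoTags"]]
```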

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-307816 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
- registry.k8s.io/kube-controller-manager@sha256:737052e0a84309cec4e9e3f1baaf80160273511c809893db40ab595e494a8777
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "88241478"
- id: 82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:a532964581fdb02b9d692589bb93db7d4b8a7bd8c120d8fb70803da0e3c83647
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "68973894"
- id: e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6
repoDigests:
- docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "99018802"
- id: cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:56568860b56c0bc8099fe1b2d84f43a18939e217e6c619126214c0f71bc27626
repoTags:
- docker.io/library/nginx:alpine
size: "50780648"
- id: 9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58
repoDigests:
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
- docker.io/library/nginx@sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd
repoTags:
- docker.io/library/nginx:latest
size: "201397159"
- id: 09a9b06ef4a4de371752352a476dd81828d66e22f79e88f95dd95d40db7965c5
repoDigests:
- localhost/minikube-local-cache-test@sha256:2086a2a904963b0ad602fd262665b8d91e5eba4939b6c0babbe7a21c3e31b922
repoTags:
- localhost/minikube-local-cache-test:functional-307816
size: "3330"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:22cdd0e13fe99dc2e5a3476b92895d89d81285cbe73b592033fa05b68c6c19a3
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "94991840"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6b93583f4856ea0923c6fffd91c802a2362511378390acc6e539a419210ee23b
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "98313623"
- id: ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f
repoDigests:
- docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "99018290"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-307816
size: "4788229"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-307816 image ls --format yaml --alsologtostderr:
I0224 12:45:11.213526  602384 out.go:345] Setting OutFile to fd 1 ...
I0224 12:45:11.213655  602384 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:11.213661  602384 out.go:358] Setting ErrFile to fd 2...
I0224 12:45:11.213665  602384 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:11.213970  602384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
I0224 12:45:11.214795  602384 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:11.214918  602384 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:11.215489  602384 cli_runner.go:164] Run: docker container inspect functional-307816 --format={{.State.Status}}
I0224 12:45:11.250223  602384 ssh_runner.go:195] Run: systemctl --version
I0224 12:45:11.250285  602384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307816
I0224 12:45:11.275312  602384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/functional-307816/id_rsa Username:docker}
I0224 12:45:11.368120  602384 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-307816 ssh pgrep buildkitd: exit status 1 (360.883473ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image build -t localhost/my-image:functional-307816 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 image build -t localhost/my-image:functional-307816 testdata/build --alsologtostderr: (2.968039807s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-arm64 -p functional-307816 image build -t localhost/my-image:functional-307816 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7477728824d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-307816
--> 6f1ab16c372
Successfully tagged localhost/my-image:functional-307816
6f1ab16c372c3c1c302f6e2e0f6cff1c05101e30451e3e5b122bbeb6133aca4a
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-307816 image build -t localhost/my-image:functional-307816 testdata/build --alsologtostderr:
I0224 12:45:11.628862  602497 out.go:345] Setting OutFile to fd 1 ...
I0224 12:45:11.629718  602497 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:11.629733  602497 out.go:358] Setting ErrFile to fd 2...
I0224 12:45:11.629740  602497 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:45:11.630053  602497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
I0224 12:45:11.630770  602497 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:11.632084  602497 config.go:182] Loaded profile config "functional-307816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0224 12:45:11.632794  602497 cli_runner.go:164] Run: docker container inspect functional-307816 --format={{.State.Status}}
I0224 12:45:11.653965  602497 ssh_runner.go:195] Run: systemctl --version
I0224 12:45:11.654017  602497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307816
I0224 12:45:11.676253  602497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/functional-307816/id_rsa Username:docker}
I0224 12:45:11.772000  602497 build_images.go:161] Building image from path: /tmp/build.553353232.tar
I0224 12:45:11.772147  602497 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0224 12:45:11.782782  602497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.553353232.tar
I0224 12:45:11.787311  602497 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.553353232.tar: stat -c "%s %y" /var/lib/minikube/build/build.553353232.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.553353232.tar': No such file or directory
I0224 12:45:11.787341  602497 ssh_runner.go:362] scp /tmp/build.553353232.tar --> /var/lib/minikube/build/build.553353232.tar (3072 bytes)
I0224 12:45:11.819441  602497 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.553353232
I0224 12:45:11.829132  602497 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.553353232 -xf /var/lib/minikube/build/build.553353232.tar
I0224 12:45:11.840087  602497 crio.go:315] Building image: /var/lib/minikube/build/build.553353232
I0224 12:45:11.840174  602497 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-307816 /var/lib/minikube/build/build.553353232 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0224 12:45:14.507405  602497 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-307816 /var/lib/minikube/build/build.553353232 --cgroup-manager=cgroupfs: (2.667201345s)
I0224 12:45:14.507471  602497 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.553353232
I0224 12:45:14.516408  602497 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.553353232.tar
I0224 12:45:14.525647  602497 build_images.go:217] Built localhost/my-image:functional-307816 from /tmp/build.553353232.tar
I0224 12:45:14.525681  602497 build_images.go:133] succeeded building to: functional-307816
I0224 12:45:14.525687  602497 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.58s)

TestFunctional/parallel/ImageCommands/Setup (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-307816
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.78s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image load --daemon kicbase/echo-server:functional-307816 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 image load --daemon kicbase/echo-server:functional-307816 --alsologtostderr: (1.515374952s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.86s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image load --daemon kicbase/echo-server:functional-307816 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-arm64 -p functional-307816 image load --daemon kicbase/echo-server:functional-307816 --alsologtostderr: (1.027524299s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-307816
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image load --daemon kicbase/echo-server:functional-307816 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image save kicbase/echo-server:functional-307816 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image rm kicbase/echo-server:functional-307816 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image ls
2025/02/24 12:45:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-307816
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 image save --daemon kicbase/echo-server:functional-307816 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-307816
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-307816 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-307816
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-307816
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-307816
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (177.15s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-765496 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0224 12:45:19.399373  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:47:35.535929  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:48:03.241725  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-765496 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m56.340266312s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (177.15s)

TestMultiControlPlane/serial/DeployApp (8.34s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-765496 -- rollout status deployment/busybox: (5.209426004s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-5w47t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-79rtf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-c86nb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-5w47t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-79rtf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-c86nb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-5w47t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-79rtf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-c86nb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.34s)

TestMultiControlPlane/serial/PingHostFromPods (1.71s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-5w47t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-5w47t -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-79rtf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-79rtf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-c86nb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-765496 -- exec busybox-58667487b6-c86nb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.71s)

TestMultiControlPlane/serial/AddWorkerNode (37.4s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-765496 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-765496 -v=7 --alsologtostderr: (36.420591735s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (37.40s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-765496 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.035734201s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

TestMultiControlPlane/serial/CopyFile (18.91s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp testdata/cp-test.txt ha-765496:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile608900053/001/cp-test_ha-765496.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496:/home/docker/cp-test.txt ha-765496-m02:/home/docker/cp-test_ha-765496_ha-765496-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m02 "sudo cat /home/docker/cp-test_ha-765496_ha-765496-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496:/home/docker/cp-test.txt ha-765496-m03:/home/docker/cp-test_ha-765496_ha-765496-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m03 "sudo cat /home/docker/cp-test_ha-765496_ha-765496-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496:/home/docker/cp-test.txt ha-765496-m04:/home/docker/cp-test_ha-765496_ha-765496-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m04 "sudo cat /home/docker/cp-test_ha-765496_ha-765496-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp testdata/cp-test.txt ha-765496-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile608900053/001/cp-test_ha-765496-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m02:/home/docker/cp-test.txt ha-765496:/home/docker/cp-test_ha-765496-m02_ha-765496.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496 "sudo cat /home/docker/cp-test_ha-765496-m02_ha-765496.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m02:/home/docker/cp-test.txt ha-765496-m03:/home/docker/cp-test_ha-765496-m02_ha-765496-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m03 "sudo cat /home/docker/cp-test_ha-765496-m02_ha-765496-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m02:/home/docker/cp-test.txt ha-765496-m04:/home/docker/cp-test_ha-765496-m02_ha-765496-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m04 "sudo cat /home/docker/cp-test_ha-765496-m02_ha-765496-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp testdata/cp-test.txt ha-765496-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile608900053/001/cp-test_ha-765496-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m03:/home/docker/cp-test.txt ha-765496:/home/docker/cp-test_ha-765496-m03_ha-765496.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496 "sudo cat /home/docker/cp-test_ha-765496-m03_ha-765496.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m03:/home/docker/cp-test.txt ha-765496-m02:/home/docker/cp-test_ha-765496-m03_ha-765496-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m02 "sudo cat /home/docker/cp-test_ha-765496-m03_ha-765496-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m03:/home/docker/cp-test.txt ha-765496-m04:/home/docker/cp-test_ha-765496-m03_ha-765496-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m04 "sudo cat /home/docker/cp-test_ha-765496-m03_ha-765496-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp testdata/cp-test.txt ha-765496-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile608900053/001/cp-test_ha-765496-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m04:/home/docker/cp-test.txt ha-765496:/home/docker/cp-test_ha-765496-m04_ha-765496.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m04 "sudo cat /home/docker/cp-test.txt"
E0224 12:49:19.622345  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:49:19.630417  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:49:19.641763  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:49:19.663164  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:49:19.704515  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:49:19.785860  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496 "sudo cat /home/docker/cp-test_ha-765496-m04_ha-765496.txt"
E0224 12:49:19.947731  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m04:/home/docker/cp-test.txt ha-765496-m02:/home/docker/cp-test_ha-765496-m04_ha-765496-m02.txt
E0224 12:49:20.269257  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m04 "sudo cat /home/docker/cp-test.txt"
E0224 12:49:20.911154  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m02 "sudo cat /home/docker/cp-test_ha-765496-m04_ha-765496-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 cp ha-765496-m04:/home/docker/cp-test.txt ha-765496-m03:/home/docker/cp-test_ha-765496-m04_ha-765496-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 ssh -n ha-765496-m03 "sudo cat /home/docker/cp-test_ha-765496-m04_ha-765496-m03.txt"
E0224 12:49:22.192657  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.91s)

TestMultiControlPlane/serial/StopSecondaryNode (12.7s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 node stop m02 -v=7 --alsologtostderr
E0224 12:49:24.754859  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:49:29.876774  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-765496 node stop m02 -v=7 --alsologtostderr: (11.968949989s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr: exit status 7 (734.425671ms)

-- stdout --
	ha-765496
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-765496-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-765496-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-765496-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0224 12:49:34.399932  618313 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:49:34.400197  618313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:49:34.400226  618313 out.go:358] Setting ErrFile to fd 2...
	I0224 12:49:34.400269  618313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:49:34.400619  618313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 12:49:34.400906  618313 out.go:352] Setting JSON to false
	I0224 12:49:34.401007  618313 mustload.go:65] Loading cluster: ha-765496
	I0224 12:49:34.401689  618313 config.go:182] Loaded profile config "ha-765496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:49:34.401760  618313 status.go:174] checking status of ha-765496 ...
	I0224 12:49:34.402601  618313 cli_runner.go:164] Run: docker container inspect ha-765496 --format={{.State.Status}}
	I0224 12:49:34.407428  618313 notify.go:220] Checking for updates...
	I0224 12:49:34.430448  618313 status.go:371] ha-765496 host status = "Running" (err=<nil>)
	I0224 12:49:34.430471  618313 host.go:66] Checking if "ha-765496" exists ...
	I0224 12:49:34.430771  618313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-765496
	I0224 12:49:34.459478  618313 host.go:66] Checking if "ha-765496" exists ...
	I0224 12:49:34.459784  618313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:49:34.459830  618313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-765496
	I0224 12:49:34.481855  618313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/ha-765496/id_rsa Username:docker}
	I0224 12:49:34.572635  618313 ssh_runner.go:195] Run: systemctl --version
	I0224 12:49:34.577116  618313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:49:34.588859  618313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:49:34.658615  618313 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2025-02-24 12:49:34.649003045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 12:49:34.659332  618313 kubeconfig.go:125] found "ha-765496" server: "https://192.168.49.254:8443"
	I0224 12:49:34.659385  618313 api_server.go:166] Checking apiserver status ...
	I0224 12:49:34.659446  618313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:49:34.670906  618313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	I0224 12:49:34.680502  618313 api_server.go:182] apiserver freezer: "13:freezer:/docker/3fb42162f705b0edb2c382c9bf06a3d42a3509c9419a8372edb1e9edf6efe33b/crio/crio-ca7500badec0356d45808d15bd639b25284790a1bb6c3fc0a16fd7cdd288bdd0"
	I0224 12:49:34.680574  618313 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3fb42162f705b0edb2c382c9bf06a3d42a3509c9419a8372edb1e9edf6efe33b/crio/crio-ca7500badec0356d45808d15bd639b25284790a1bb6c3fc0a16fd7cdd288bdd0/freezer.state
	I0224 12:49:34.689302  618313 api_server.go:204] freezer state: "THAWED"
	I0224 12:49:34.689343  618313 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0224 12:49:34.697766  618313 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0224 12:49:34.697796  618313 status.go:463] ha-765496 apiserver status = Running (err=<nil>)
	I0224 12:49:34.697807  618313 status.go:176] ha-765496 status: &{Name:ha-765496 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:49:34.697824  618313 status.go:174] checking status of ha-765496-m02 ...
	I0224 12:49:34.698204  618313 cli_runner.go:164] Run: docker container inspect ha-765496-m02 --format={{.State.Status}}
	I0224 12:49:34.719050  618313 status.go:371] ha-765496-m02 host status = "Stopped" (err=<nil>)
	I0224 12:49:34.719080  618313 status.go:384] host is not running, skipping remaining checks
	I0224 12:49:34.719087  618313 status.go:176] ha-765496-m02 status: &{Name:ha-765496-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:49:34.719107  618313 status.go:174] checking status of ha-765496-m03 ...
	I0224 12:49:34.719519  618313 cli_runner.go:164] Run: docker container inspect ha-765496-m03 --format={{.State.Status}}
	I0224 12:49:34.736631  618313 status.go:371] ha-765496-m03 host status = "Running" (err=<nil>)
	I0224 12:49:34.736659  618313 host.go:66] Checking if "ha-765496-m03" exists ...
	I0224 12:49:34.736985  618313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-765496-m03
	I0224 12:49:34.753942  618313 host.go:66] Checking if "ha-765496-m03" exists ...
	I0224 12:49:34.754255  618313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:49:34.754301  618313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-765496-m03
	I0224 12:49:34.772200  618313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33531 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/ha-765496-m03/id_rsa Username:docker}
	I0224 12:49:34.860592  618313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:49:34.873635  618313 kubeconfig.go:125] found "ha-765496" server: "https://192.168.49.254:8443"
	I0224 12:49:34.873667  618313 api_server.go:166] Checking apiserver status ...
	I0224 12:49:34.873708  618313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:49:34.884425  618313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup
	I0224 12:49:34.893609  618313 api_server.go:182] apiserver freezer: "13:freezer:/docker/dd74b0aa3ee9bc81bd8de2b03482f82ca52373ae0f8fef4a1ea2511056a18c22/crio/crio-83a2afdd299aa43f3727443ca493b064b2bcdc9767c4920c38aa085e8a3380e6"
	I0224 12:49:34.893682  618313 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dd74b0aa3ee9bc81bd8de2b03482f82ca52373ae0f8fef4a1ea2511056a18c22/crio/crio-83a2afdd299aa43f3727443ca493b064b2bcdc9767c4920c38aa085e8a3380e6/freezer.state
	I0224 12:49:34.902931  618313 api_server.go:204] freezer state: "THAWED"
	I0224 12:49:34.903015  618313 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0224 12:49:34.911451  618313 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0224 12:49:34.911479  618313 status.go:463] ha-765496-m03 apiserver status = Running (err=<nil>)
	I0224 12:49:34.911489  618313 status.go:176] ha-765496-m03 status: &{Name:ha-765496-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:49:34.911505  618313 status.go:174] checking status of ha-765496-m04 ...
	I0224 12:49:34.911808  618313 cli_runner.go:164] Run: docker container inspect ha-765496-m04 --format={{.State.Status}}
	I0224 12:49:34.930388  618313 status.go:371] ha-765496-m04 host status = "Running" (err=<nil>)
	I0224 12:49:34.930415  618313 host.go:66] Checking if "ha-765496-m04" exists ...
	I0224 12:49:34.930721  618313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-765496-m04
	I0224 12:49:34.947727  618313 host.go:66] Checking if "ha-765496-m04" exists ...
	I0224 12:49:34.948029  618313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:49:34.948077  618313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-765496-m04
	I0224 12:49:34.964925  618313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/ha-765496-m04/id_rsa Username:docker}
	I0224 12:49:35.060933  618313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:49:35.076908  618313 status.go:176] ha-765496-m04 status: &{Name:ha-765496-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.70s)
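The `apiserver freezer` lines in the stderr above show how the status check locates a running apiserver: it greps the `freezer` controller entry out of `/proc/<pid>/cgroup` and reads the corresponding `freezer.state` file under the cgroup v1 freezer mount (a `THAWED` state means the container is running, not paused). A minimal Go sketch of that path derivation, with shortened, made-up container IDs (`deriveFreezerState` is illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"regexp"
)

// deriveFreezerState extracts the cgroup path from the `freezer` controller
// line of a /proc/<pid>/cgroup dump and builds the freezer.state file path
// under the cgroup v1 freezer hierarchy, mirroring the check in the log.
func deriveFreezerState(cgroupFile string) (string, error) {
	re := regexp.MustCompile(`(?m)^[0-9]+:freezer:(.+)$`)
	m := re.FindStringSubmatch(cgroupFile)
	if m == nil {
		return "", fmt.Errorf("no freezer controller line found")
	}
	return "/sys/fs/cgroup/freezer" + m[1] + "/freezer.state", nil
}

func main() {
	// Sample /proc/<pid>/cgroup content, heavily shortened from the run above.
	sample := "14:name=systemd:/docker/3fb4/crio/crio-ca75\n" +
		"13:freezer:/docker/3fb4/crio/crio-ca75\n" +
		"12:cpuset:/docker/3fb4/crio/crio-ca75\n"
	path, err := deriveFreezerState(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(path) // .../freezer/docker/3fb4/crio/crio-ca75/freezer.state
}
```

Reading that file and comparing it against `THAWED` is what distinguishes a paused apiserver container from a live one before the healthz probe runs.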

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.9s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 node start m02 -v=7 --alsologtostderr
E0224 12:49:40.118384  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-765496 node start m02 -v=7 --alsologtostderr: (22.160875342s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr: (1.585439214s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.90s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.59s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0224 12:50:00.600723  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.587635598s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.59s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (182.18s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-765496 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-765496 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-765496 -v=7 --alsologtostderr: (37.251328453s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-765496 --wait=true -v=7 --alsologtostderr
E0224 12:50:41.562097  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:52:03.483473  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:52:35.535797  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-765496 --wait=true -v=7 --alsologtostderr: (2m24.739548565s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-765496
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (182.18s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.82s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-765496 node delete m03 -v=7 --alsologtostderr: (11.868193739s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.82s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (35.82s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-765496 stop -v=7 --alsologtostderr: (35.702883423s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr: exit status 7 (112.94092ms)

-- stdout --
	ha-765496
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-765496-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-765496-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 12:53:52.834444  632438 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:53:52.834589  632438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:53:52.834600  632438 out.go:358] Setting ErrFile to fd 2...
	I0224 12:53:52.834606  632438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:53:52.834948  632438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 12:53:52.835172  632438 out.go:352] Setting JSON to false
	I0224 12:53:52.835206  632438 mustload.go:65] Loading cluster: ha-765496
	I0224 12:53:52.835434  632438 notify.go:220] Checking for updates...
	I0224 12:53:52.835969  632438 config.go:182] Loaded profile config "ha-765496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 12:53:52.836023  632438 status.go:174] checking status of ha-765496 ...
	I0224 12:53:52.836630  632438 cli_runner.go:164] Run: docker container inspect ha-765496 --format={{.State.Status}}
	I0224 12:53:52.856829  632438 status.go:371] ha-765496 host status = "Stopped" (err=<nil>)
	I0224 12:53:52.856866  632438 status.go:384] host is not running, skipping remaining checks
	I0224 12:53:52.856875  632438 status.go:176] ha-765496 status: &{Name:ha-765496 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:53:52.856902  632438 status.go:174] checking status of ha-765496-m02 ...
	I0224 12:53:52.857202  632438 cli_runner.go:164] Run: docker container inspect ha-765496-m02 --format={{.State.Status}}
	I0224 12:53:52.881020  632438 status.go:371] ha-765496-m02 host status = "Stopped" (err=<nil>)
	I0224 12:53:52.881043  632438 status.go:384] host is not running, skipping remaining checks
	I0224 12:53:52.881050  632438 status.go:176] ha-765496-m02 status: &{Name:ha-765496-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:53:52.881072  632438 status.go:174] checking status of ha-765496-m04 ...
	I0224 12:53:52.881388  632438 cli_runner.go:164] Run: docker container inspect ha-765496-m04 --format={{.State.Status}}
	I0224 12:53:52.898110  632438 status.go:371] ha-765496-m04 host status = "Stopped" (err=<nil>)
	I0224 12:53:52.898133  632438 status.go:384] host is not running, skipping remaining checks
	I0224 12:53:52.898151  632438 status.go:176] ha-765496-m04 status: &{Name:ha-765496-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.82s)

TestMultiControlPlane/serial/RestartCluster (58.2s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-765496 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0224 12:54:19.623388  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:54:47.325747  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-765496 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (57.003198441s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.20s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

TestMultiControlPlane/serial/AddSecondaryNode (74.68s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-765496 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-765496 --control-plane -v=7 --alsologtostderr: (1m13.673659604s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-765496 status -v=7 --alsologtostderr: (1.003500958s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.68s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

TestJSONOutput/start/Command (47.84s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-147395 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-147395 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (47.837864494s)
--- PASS: TestJSONOutput/start/Command (47.84s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-147395 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-147395 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-147395 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-147395 --output=json --user=testUser: (5.866002658s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-614006 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-614006 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.629642ms)
-- stdout --
	{"specversion":"1.0","id":"363bbc16-9a65-4990-b234-5419b8c99560","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-614006] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0bdc59f-107c-47da-95dc-5ca2a449a44d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20451"}}
	{"specversion":"1.0","id":"fbf5d2b9-ff98-4e66-874d-25cc82c13e2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0be0e987-224f-4222-a05c-6cff21ef4e8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig"}}
	{"specversion":"1.0","id":"136770e4-2d3c-4159-8254-ec6e9c4fc3bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube"}}
	{"specversion":"1.0","id":"c7628790-a484-420c-8262-c4c9cc77cce1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9032926d-27ca-4aec-a91c-ce92fb2c62ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"966f4333-69df-4aab-929a-cc9e0819fe60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-614006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-614006
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (36.98s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-348828 --network=
E0224 12:57:35.535783  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-348828 --network=: (34.812689241s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-348828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-348828
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-348828: (2.140804275s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.98s)

TestKicCustomNetwork/use_default_bridge_network (35.73s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-069586 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-069586 --network=bridge: (33.672760535s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-069586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-069586
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-069586: (2.027150166s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.73s)

TestKicExistingNetwork (30.65s)
=== RUN   TestKicExistingNetwork
I0224 12:58:28.991464  573823 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0224 12:58:29.008233  573823 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0224 12:58:29.009124  573823 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0224 12:58:29.009167  573823 cli_runner.go:164] Run: docker network inspect existing-network
W0224 12:58:29.025398  573823 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0224 12:58:29.025431  573823 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0224 12:58:29.025446  573823 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0224 12:58:29.025554  573823 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0224 12:58:29.043612  573823 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4c75ba14a7c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:f0:35:53:8e:bd} reservation:<nil>}
I0224 12:58:29.044007  573823 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c59910}
I0224 12:58:29.044036  573823 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0224 12:58:29.044101  573823 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0224 12:58:29.104294  573823 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-320094 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-320094 --network=existing-network: (28.48546521s)
helpers_test.go:175: Cleaning up "existing-network-320094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-320094
E0224 12:58:58.603179  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-320094: (2.018122203s)
I0224 12:58:59.624423  573823 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.65s)

TestKicCustomSubnet (33.3s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-949664 --subnet=192.168.60.0/24
E0224 12:59:19.621796  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-949664 --subnet=192.168.60.0/24: (31.074329655s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-949664 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-949664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-949664
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-949664: (2.202987696s)
--- PASS: TestKicCustomSubnet (33.30s)

TestKicStaticIP (35.77s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-937720 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-937720 --static-ip=192.168.200.200: (33.462861374s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-937720 ip
helpers_test.go:175: Cleaning up "static-ip-937720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-937720
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-937720: (2.151170032s)
--- PASS: TestKicStaticIP (35.77s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.37s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-090803 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-090803 --driver=docker  --container-runtime=crio: (29.394043071s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-093502 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-093502 --driver=docker  --container-runtime=crio: (33.302525568s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-090803
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-093502
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-093502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-093502
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-093502: (1.967326604s)
helpers_test.go:175: Cleaning up "first-090803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-090803
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-090803: (2.335097234s)
--- PASS: TestMinikubeProfile (68.37s)

TestMountStart/serial/StartWithMountFirst (6.24s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-574667 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-574667 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.239785923s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.24s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-574667 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.84s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-576554 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-576554 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.83761074s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.84s)

TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-576554 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.62s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-574667 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-574667 --alsologtostderr -v=5: (1.622776281s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-576554 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-576554
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-576554: (1.207982673s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.65s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-576554
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-576554: (6.64935214s)
--- PASS: TestMountStart/serial/RestartStopped (7.65s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-576554 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (77.96s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-901147 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0224 13:02:35.535880  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-901147 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.446058587s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.96s)

TestMultiNode/serial/DeployApp2Nodes (5.77s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-901147 -- rollout status deployment/busybox: (3.925654527s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-ccrrz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-db57d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-ccrrz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-db57d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-ccrrz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-db57d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.77s)

TestMultiNode/serial/PingHostFrom2Pods (1.03s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-ccrrz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-ccrrz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-db57d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-901147 -- exec busybox-58667487b6-db57d -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)

TestMultiNode/serial/AddNode (29.16s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-901147 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-901147 -v 3 --alsologtostderr: (28.481717521s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.16s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-901147 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.67s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (10.15s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp testdata/cp-test.txt multinode-901147:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile403808603/001/cp-test_multinode-901147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147:/home/docker/cp-test.txt multinode-901147-m02:/home/docker/cp-test_multinode-901147_multinode-901147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m02 "sudo cat /home/docker/cp-test_multinode-901147_multinode-901147-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147:/home/docker/cp-test.txt multinode-901147-m03:/home/docker/cp-test_multinode-901147_multinode-901147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m03 "sudo cat /home/docker/cp-test_multinode-901147_multinode-901147-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp testdata/cp-test.txt multinode-901147-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile403808603/001/cp-test_multinode-901147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147-m02:/home/docker/cp-test.txt multinode-901147:/home/docker/cp-test_multinode-901147-m02_multinode-901147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147 "sudo cat /home/docker/cp-test_multinode-901147-m02_multinode-901147.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147-m02:/home/docker/cp-test.txt multinode-901147-m03:/home/docker/cp-test_multinode-901147-m02_multinode-901147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m03 "sudo cat /home/docker/cp-test_multinode-901147-m02_multinode-901147-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp testdata/cp-test.txt multinode-901147-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile403808603/001/cp-test_multinode-901147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147-m03:/home/docker/cp-test.txt multinode-901147:/home/docker/cp-test_multinode-901147-m03_multinode-901147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147 "sudo cat /home/docker/cp-test_multinode-901147-m03_multinode-901147.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 cp multinode-901147-m03:/home/docker/cp-test.txt multinode-901147-m02:/home/docker/cp-test_multinode-901147-m03_multinode-901147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 ssh -n multinode-901147-m02 "sudo cat /home/docker/cp-test_multinode-901147-m03_multinode-901147-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.15s)
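The CopyFile test above walks every source/destination pair of the three nodes, round-tripping `cp-test.txt` via `minikube cp` and verifying each hop with `sudo cat`. A sketch of that pair enumeration (the `crossCopyPairs` helper is hypothetical, not minikube code; node names are taken from the log):

```go
package main

import "fmt"

// crossCopyPairs enumerates every ordered (source, destination) pair of
// distinct nodes, matching the n*(n-1) cross-copy pattern in the log above.
func crossCopyPairs(nodes []string) [][2]string {
	var pairs [][2]string
	for _, src := range nodes {
		for _, dst := range nodes {
			if src != dst {
				pairs = append(pairs, [2]string{src, dst})
			}
		}
	}
	return pairs
}

func main() {
	nodes := []string{"multinode-901147", "multinode-901147-m02", "multinode-901147-m03"}
	for _, p := range crossCopyPairs(nodes) {
		// Command layout mirrors the log's cp invocations (illustrative only).
		fmt.Printf("minikube -p multinode-901147 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
			p[0], p[1], p[0], p[1])
	}
}
```

With three nodes this yields six node-to-node copies, which matches the six cross-node `cp` invocations in the log (plus the testdata-to-node and node-to-host copies per node).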

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-901147 node stop m03: (1.209167816s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-901147 status: exit status 7 (516.427894ms)

-- stdout --
	multinode-901147
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-901147-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-901147-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-901147 status --alsologtostderr: exit status 7 (509.964591ms)

-- stdout --
	multinode-901147
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-901147-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-901147-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 13:03:52.156190  685314 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:03:52.156401  685314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:03:52.156428  685314 out.go:358] Setting ErrFile to fd 2...
	I0224 13:03:52.156445  685314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:03:52.156708  685314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 13:03:52.156921  685314 out.go:352] Setting JSON to false
	I0224 13:03:52.156988  685314 mustload.go:65] Loading cluster: multinode-901147
	I0224 13:03:52.157064  685314 notify.go:220] Checking for updates...
	I0224 13:03:52.157996  685314 config.go:182] Loaded profile config "multinode-901147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:03:52.158028  685314 status.go:174] checking status of multinode-901147 ...
	I0224 13:03:52.158620  685314 cli_runner.go:164] Run: docker container inspect multinode-901147 --format={{.State.Status}}
	I0224 13:03:52.179765  685314 status.go:371] multinode-901147 host status = "Running" (err=<nil>)
	I0224 13:03:52.179790  685314 host.go:66] Checking if "multinode-901147" exists ...
	I0224 13:03:52.180100  685314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-901147
	I0224 13:03:52.206229  685314 host.go:66] Checking if "multinode-901147" exists ...
	I0224 13:03:52.206558  685314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 13:03:52.206615  685314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-901147
	I0224 13:03:52.224062  685314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33641 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/multinode-901147/id_rsa Username:docker}
	I0224 13:03:52.316523  685314 ssh_runner.go:195] Run: systemctl --version
	I0224 13:03:52.320606  685314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:03:52.332074  685314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 13:03:52.396538  685314 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-02-24 13:03:52.387516688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 13:03:52.397135  685314 kubeconfig.go:125] found "multinode-901147" server: "https://192.168.67.2:8443"
	I0224 13:03:52.397172  685314 api_server.go:166] Checking apiserver status ...
	I0224 13:03:52.397218  685314 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 13:03:52.408433  685314 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	I0224 13:03:52.418100  685314 api_server.go:182] apiserver freezer: "13:freezer:/docker/b3b4852a1dc9a0429bca5fc13147c8d12b78b44e92427244991cf53f33edaddc/crio/crio-c8870910ef0b998ac289f4a0fa54dcca5f16ae8d7a134c5de18af92d002cad24"
	I0224 13:03:52.418166  685314 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b3b4852a1dc9a0429bca5fc13147c8d12b78b44e92427244991cf53f33edaddc/crio/crio-c8870910ef0b998ac289f4a0fa54dcca5f16ae8d7a134c5de18af92d002cad24/freezer.state
	I0224 13:03:52.426531  685314 api_server.go:204] freezer state: "THAWED"
	I0224 13:03:52.426559  685314 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0224 13:03:52.434763  685314 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0224 13:03:52.434790  685314 status.go:463] multinode-901147 apiserver status = Running (err=<nil>)
	I0224 13:03:52.434812  685314 status.go:176] multinode-901147 status: &{Name:multinode-901147 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 13:03:52.434832  685314 status.go:174] checking status of multinode-901147-m02 ...
	I0224 13:03:52.435146  685314 cli_runner.go:164] Run: docker container inspect multinode-901147-m02 --format={{.State.Status}}
	I0224 13:03:52.451576  685314 status.go:371] multinode-901147-m02 host status = "Running" (err=<nil>)
	I0224 13:03:52.451602  685314 host.go:66] Checking if "multinode-901147-m02" exists ...
	I0224 13:03:52.451916  685314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-901147-m02
	I0224 13:03:52.468203  685314 host.go:66] Checking if "multinode-901147-m02" exists ...
	I0224 13:03:52.468506  685314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 13:03:52.468555  685314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-901147-m02
	I0224 13:03:52.486029  685314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33646 SSHKeyPath:/home/jenkins/minikube-integration/20451-568444/.minikube/machines/multinode-901147-m02/id_rsa Username:docker}
	I0224 13:03:52.576016  685314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 13:03:52.587019  685314 status.go:176] multinode-901147-m02 status: &{Name:multinode-901147-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0224 13:03:52.587052  685314 status.go:174] checking status of multinode-901147-m03 ...
	I0224 13:03:52.587414  685314 cli_runner.go:164] Run: docker container inspect multinode-901147-m03 --format={{.State.Status}}
	I0224 13:03:52.609163  685314 status.go:371] multinode-901147-m03 host status = "Stopped" (err=<nil>)
	I0224 13:03:52.609186  685314 status.go:384] host is not running, skipping remaining checks
	I0224 13:03:52.609192  685314 status.go:176] multinode-901147-m03 status: &{Name:multinode-901147-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
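Note the exit status 7 above: `minikube status` returns a non-zero code as soon as any node's host is stopped, even though the command itself ran fine. A minimal sketch of that aggregation, assuming only that 7 is the "host stopped" exit status shown in this log (the `statusExitCode` helper is illustrative, not minikube's implementation):

```go
package main

import "fmt"

// statusExitCode folds per-node host states into a single process exit code:
// 0 when every host is Running, 7 (the status shown in the log) otherwise.
func statusExitCode(hostStates map[string]string) int {
	for _, s := range hostStates {
		if s != "Running" {
			return 7 // matches the `exit status 7` reported above
		}
	}
	return 0
}

func main() {
	states := map[string]string{
		"multinode-901147":     "Running",
		"multinode-901147-m02": "Running",
		"multinode-901147-m03": "Stopped", // m03 was stopped by `node stop m03`
	}
	fmt.Println(statusExitCode(states))
}
```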

TestMultiNode/serial/StartAfterStop (10.01s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-901147 node start m03 -v=7 --alsologtostderr: (9.218238574s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.01s)

TestMultiNode/serial/RestartKeepsNodes (83.84s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-901147
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-901147
E0224 13:04:19.623326  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-901147: (24.860283483s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-901147 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-901147 --wait=true -v=8 --alsologtostderr: (58.842705414s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-901147
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.84s)

TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-901147 node delete m03: (4.639598784s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)
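The `-o go-template=...` check above prints the status of each node's Ready condition. kubectl evaluates such templates with Go's text/template engine, so the exact template string from the log can be exercised locally; the sample node JSON below is invented for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// Template string exactly as passed to kubectl in the log above.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// Invented two-node sample in the shape of `kubectl get nodes -o json`.
const sample = `{"items":[
  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},
                           {"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

// renderReady executes the Ready-condition template against decoded JSON,
// printing one " <status>" line per node, as the test expects.
func renderReady(data string) (string, error) {
	var v map[string]interface{}
	if err := json.Unmarshal([]byte(data), &v); err != nil {
		return "", err
	}
	t, err := template.New("ready").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, v); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderReady(sample)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

After the m03 delete, the test presumably expects one `True` line per remaining node; with the two-node sample here the template emits two.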

TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 stop
E0224 13:05:42.689816  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-901147 stop: (23.65971666s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-901147 status: exit status 7 (97.587576ms)

-- stdout --
	multinode-901147
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-901147-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-901147 status --alsologtostderr: exit status 7 (95.173128ms)

-- stdout --
	multinode-901147
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-901147-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 13:05:55.563691  692891 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:05:55.563916  692891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:05:55.563944  692891 out.go:358] Setting ErrFile to fd 2...
	I0224 13:05:55.563964  692891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:05:55.564258  692891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 13:05:55.564484  692891 out.go:352] Setting JSON to false
	I0224 13:05:55.564544  692891 mustload.go:65] Loading cluster: multinode-901147
	I0224 13:05:55.564641  692891 notify.go:220] Checking for updates...
	I0224 13:05:55.565045  692891 config.go:182] Loaded profile config "multinode-901147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:05:55.565088  692891 status.go:174] checking status of multinode-901147 ...
	I0224 13:05:55.565682  692891 cli_runner.go:164] Run: docker container inspect multinode-901147 --format={{.State.Status}}
	I0224 13:05:55.583358  692891 status.go:371] multinode-901147 host status = "Stopped" (err=<nil>)
	I0224 13:05:55.583384  692891 status.go:384] host is not running, skipping remaining checks
	I0224 13:05:55.583391  692891 status.go:176] multinode-901147 status: &{Name:multinode-901147 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 13:05:55.583423  692891 status.go:174] checking status of multinode-901147-m02 ...
	I0224 13:05:55.583744  692891 cli_runner.go:164] Run: docker container inspect multinode-901147-m02 --format={{.State.Status}}
	I0224 13:05:55.606258  692891 status.go:371] multinode-901147-m02 host status = "Stopped" (err=<nil>)
	I0224 13:05:55.606283  692891 status.go:384] host is not running, skipping remaining checks
	I0224 13:05:55.606290  692891 status.go:176] multinode-901147-m02 status: &{Name:multinode-901147-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

TestMultiNode/serial/RestartMultiNode (57.73s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-901147 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-901147 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (57.066944504s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-901147 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.73s)

TestMultiNode/serial/ValidateNameConflict (32.64s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-901147
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-901147-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-901147-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.178527ms)

-- stdout --
	* [multinode-901147-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-901147-m02' is duplicated with machine name 'multinode-901147-m02' in profile 'multinode-901147'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-901147-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-901147-m03 --driver=docker  --container-runtime=crio: (30.179648815s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-901147
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-901147: exit status 80 (348.216724ms)

-- stdout --
	* Adding node m03 to cluster multinode-901147 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-901147-m03 already exists in multinode-901147-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-901147-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-901147-m03: (1.961799905s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.64s)
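ValidateNameConflict above shows minikube refusing a new profile whose name collides with an existing machine name (exit status 14, `MK_USAGE`), and refusing `node add` into a name already claimed by another profile (exit status 80, `GUEST_NODE_ADD`). A hypothetical sketch of the first check, the name-uniqueness test (not minikube's actual implementation):

```go
package main

import "fmt"

// profileNameConflicts reports whether a candidate profile name collides
// with any existing machine name, mirroring the MK_USAGE rejection above.
func profileNameConflicts(candidate string, machineNames []string) bool {
	for _, m := range machineNames {
		if m == candidate {
			return true
		}
	}
	return false
}

func main() {
	// Machine names from the log: the multinode profile owns -m02 style nodes.
	existing := []string{"multinode-901147", "multinode-901147-m02"}
	fmt.Println(profileNameConflicts("multinode-901147-m02", existing)) // rejected, as in the log
	fmt.Println(profileNameConflicts("multinode-901147-m04", existing))
}
```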

TestPreload (125.63s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-039359 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0224 13:07:35.535945  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-039359 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m32.63330619s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-039359 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-039359 image pull gcr.io/k8s-minikube/busybox: (3.37794459s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-039359
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-039359: (5.821639223s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-039359 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0224 13:09:19.621993  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-039359 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.03778479s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-039359 image list
helpers_test.go:175: Cleaning up "test-preload-039359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-039359
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-039359: (2.43529973s)
--- PASS: TestPreload (125.63s)

TestScheduledStopUnix (110.4s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-863807 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-863807 --memory=2048 --driver=docker  --container-runtime=crio: (34.101995341s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-863807 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-863807 -n scheduled-stop-863807
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-863807 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0224 13:10:10.385548  573823 retry.go:31] will retry after 111.727µs: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.385969  573823 retry.go:31] will retry after 115.771µs: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.387079  573823 retry.go:31] will retry after 297.066µs: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.388161  573823 retry.go:31] will retry after 399.868µs: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.389298  573823 retry.go:31] will retry after 703.419µs: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.392589  573823 retry.go:31] will retry after 736.537µs: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.393734  573823 retry.go:31] will retry after 1.374142ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.395882  573823 retry.go:31] will retry after 947.448µs: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.398704  573823 retry.go:31] will retry after 3.171399ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.402873  573823 retry.go:31] will retry after 4.96566ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.408111  573823 retry.go:31] will retry after 4.679604ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.413364  573823 retry.go:31] will retry after 12.692187ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.426709  573823 retry.go:31] will retry after 7.548191ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.434941  573823 retry.go:31] will retry after 20.039875ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.455147  573823 retry.go:31] will retry after 16.774107ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
I0224 13:10:10.472451  573823 retry.go:31] will retry after 25.323105ms: open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/scheduled-stop-863807/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-863807 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-863807 -n scheduled-stop-863807
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-863807
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-863807 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-863807
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-863807: exit status 7 (69.382846ms)

                                                
                                                
-- stdout --
	scheduled-stop-863807
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-863807 -n scheduled-stop-863807
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-863807 -n scheduled-stop-863807: exit status 7 (67.660793ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-863807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-863807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-863807: (4.764214097s)
--- PASS: TestScheduledStopUnix (110.40s)
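The "status error: exit status 7 (may be ok)" lines above reflect that `minikube status` signals a stopped host through its exit code rather than a failure. A hedged sketch of how a test can read that code as data instead of an error (the `exitCode` helper is illustrative; `sh -c "exit 7"` stands in for the real `minikube status` call):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit code, treating a
// non-zero exit as data rather than a hard failure -- the way the
// test above accepts `minikube status` exit 7 for a stopped host.
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return -1, err // e.g. the binary was not found at all
}

func main() {
	code, err := exitCode("sh", "-c", "exit 7")
	if err != nil {
		panic(err)
	}
	fmt.Println(code == 7) // true: exit 7 means "stopped", not a test failure
}
```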

                                                
                                    
TestInsufficientStorage (13.77s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-714833 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-714833 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.275561195s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cf2aa5bd-6769-489e-8fd8-5b29cf0bbecc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-714833] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"37d33a9a-6a1a-4abc-b01a-79531c6b440d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20451"}}
	{"specversion":"1.0","id":"46e902a4-4160-4698-b931-530bb4e74238","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"418262d0-c75f-40ad-b113-2c44a14db0b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig"}}
	{"specversion":"1.0","id":"014a16c7-cee6-41e1-8d1b-efbfbfd19629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube"}}
	{"specversion":"1.0","id":"333ed058-be88-4f70-ab99-0ed0e0683c49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3938f246-a408-40a7-bb1b-22dd049c0b83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7fd934c9-e059-4329-8bd1-b05d378796d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fc382ede-88c1-4ca4-9606-738ddb371619","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2e993207-27bb-46d2-bac0-cf4e98b834b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3eb067e-0ff9-4034-9618-162e9e4519c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d286d671-1343-422d-9b11-3d1a89cf58b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-714833\" primary control-plane node in \"insufficient-storage-714833\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1406677f-a1dd-4f3c-932a-07ac95bde3bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1740046583-20436 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"718f4ae4-8305-4e4d-a7c5-50844cf7705a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab50bbdc-f263-436a-a1fd-80b1da6f8896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-714833 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-714833 --output=json --layout=cluster: exit status 7 (287.338046ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-714833","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-714833","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 13:11:37.720004  710492 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-714833" does not appear in /home/jenkins/minikube-integration/20451-568444/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-714833 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-714833 --output=json --layout=cluster: exit status 7 (295.555774ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-714833","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-714833","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 13:11:38.016936  710553 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-714833" does not appear in /home/jenkins/minikube-integration/20451-568444/kubeconfig
	E0224 13:11:38.027536  710553 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/insufficient-storage-714833/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-714833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-714833
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-714833: (1.91331149s)
--- PASS: TestInsufficientStorage (13.77s)

                                                
                                    
TestRunningBinaryUpgrade (68.07s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2431405387 start -p running-upgrade-974380 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2431405387 start -p running-upgrade-974380 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.359585352s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-974380 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-974380 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.926104317s)
helpers_test.go:175: Cleaning up "running-upgrade-974380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-974380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-974380: (2.95310576s)
--- PASS: TestRunningBinaryUpgrade (68.07s)

                                                
                                    
TestKubernetesUpgrade (390.77s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-812990 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-812990 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.729617251s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-812990
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-812990: (2.856688006s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-812990 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-812990 status --format={{.Host}}: exit status 7 (159.16811ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-812990 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-812990 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.045214321s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-812990 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-812990 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-812990 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (116.16156ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-812990] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-812990
	    minikube start -p kubernetes-upgrade-812990 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8129902 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-812990 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-812990 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-812990 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.283073891s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-812990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-812990
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-812990: (2.443745801s)
--- PASS: TestKubernetesUpgrade (390.77s)

                                                
                                    
TestMissingContainerUpgrade (161.42s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1651977698 start -p missing-upgrade-400278 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1651977698 start -p missing-upgrade-400278 --memory=2200 --driver=docker  --container-runtime=crio: (1m27.691475529s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-400278
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-400278: (13.468490358s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-400278
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-400278 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-400278 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.080481029s)
helpers_test.go:175: Cleaning up "missing-upgrade-400278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-400278
E0224 13:14:19.621334  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-400278: (2.388729317s)
--- PASS: TestMissingContainerUpgrade (161.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670619 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-670619 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (126.832073ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-670619] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670619 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670619 --driver=docker  --container-runtime=crio: (37.301815981s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-670619 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.69s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (30.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670619 --no-kubernetes --driver=docker  --container-runtime=crio
E0224 13:12:35.537841  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670619 --no-kubernetes --driver=docker  --container-runtime=crio: (27.967863492s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-670619 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-670619 status -o json: exit status 2 (415.683372ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-670619","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-670619
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-670619: (2.040476792s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.42s)

                                                
                                    
TestNoKubernetes/serial/Start (9.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670619 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670619 --no-kubernetes --driver=docker  --container-runtime=crio: (9.786606282s)
--- PASS: TestNoKubernetes/serial/Start (9.79s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-670619 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-670619 "sudo systemctl is-active --quiet service kubelet": exit status 1 (397.893896ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)
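The check above relies on the `systemctl is-active` convention: exit 0 means the unit is running, non-zero (status 3 in the stderr above) means it is not, so a non-zero exit is the *expected* outcome here. A sketch of that boolean probe (`isActive` is illustrative, and `sh -c` stands in for the real `minikube ssh ... systemctl` invocation):

```go
package main

import (
	"fmt"
	"os/exec"
)

// isActive reports whether a probe command exits 0, following the
// `systemctl is-active --quiet` convention the test above relies on.
func isActive(probe string) bool {
	return exec.Command("sh", "-c", probe).Run() == nil
}

func main() {
	// Exit 3 is what `systemctl is-active` returns for an inactive unit.
	fmt.Println(isActive("exit 3")) // false: kubelet is not running
	fmt.Println(isActive("exit 0")) // true: the unit would be active
}
```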

                                                
                                    
TestNoKubernetes/serial/ProfileList (5.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (5.325206185s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.84s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-670619
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-670619: (1.261436094s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670619 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670619 --driver=docker  --container-runtime=crio: (7.269646986s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.27s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-670619 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-670619 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.261946ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (88.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2183079594 start -p stopped-upgrade-020786 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2183079594 start -p stopped-upgrade-020786 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.968839381s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2183079594 -p stopped-upgrade-020786 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2183079594 -p stopped-upgrade-020786 stop: (2.657315522s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-020786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0224 13:15:38.605081  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-020786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.310161192s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (88.94s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-020786
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-020786: (1.357204208s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.36s)

                                                
                                    
TestPause/serial/Start (51.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-766380 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0224 13:17:35.535721  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-766380 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (51.775547259s)
--- PASS: TestPause/serial/Start (51.78s)

TestPause/serial/SecondStartNoReconfiguration (25.41s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-766380 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-766380 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.393711649s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.41s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-766380 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-766380 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-766380 --output=json --layout=cluster: exit status 2 (310.183011ms)

-- stdout --
	{"Name":"pause-766380","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-766380","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-766380 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (1.11s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-766380 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-766380 --alsologtostderr -v=5: (1.106100413s)
--- PASS: TestPause/serial/PauseAgain (1.11s)

TestPause/serial/DeletePaused (2.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-766380 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-766380 --alsologtostderr -v=5: (2.792764628s)
--- PASS: TestPause/serial/DeletePaused (2.79s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-766380
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-766380: exit status 1 (17.552537ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-766380: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

TestNetworkPlugins/group/false (5.2s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-946657 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-946657 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (289.754776ms)

-- stdout --
	* [false-946657] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
-- /stdout --
** stderr ** 
	I0224 13:19:12.644142  749598 out.go:345] Setting OutFile to fd 1 ...
	I0224 13:19:12.644377  749598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:19:12.644401  749598 out.go:358] Setting ErrFile to fd 2...
	I0224 13:19:12.644419  749598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 13:19:12.644694  749598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-568444/.minikube/bin
	I0224 13:19:12.645195  749598 out.go:352] Setting JSON to false
	I0224 13:19:12.646290  749598 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14501,"bootTime":1740388652,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0224 13:19:12.646412  749598 start.go:139] virtualization:  
	I0224 13:19:12.650328  749598 out.go:177] * [false-946657] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0224 13:19:12.653526  749598 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 13:19:12.653604  749598 notify.go:220] Checking for updates...
	I0224 13:19:12.659433  749598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 13:19:12.662384  749598 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-568444/kubeconfig
	I0224 13:19:12.665277  749598 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-568444/.minikube
	I0224 13:19:12.668189  749598 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0224 13:19:12.671011  749598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 13:19:12.674406  749598 config.go:182] Loaded profile config "kubernetes-upgrade-812990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0224 13:19:12.674515  749598 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 13:19:12.735587  749598 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 13:19:12.735699  749598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 13:19:12.835438  749598 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-24 13:19:12.822919156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0]] Warnings:<nil>}}
	I0224 13:19:12.835543  749598 docker.go:318] overlay module found
	I0224 13:19:12.839174  749598 out.go:177] * Using the docker driver based on user configuration
	I0224 13:19:12.843195  749598 start.go:297] selected driver: docker
	I0224 13:19:12.843216  749598 start.go:901] validating driver "docker" against <nil>
	I0224 13:19:12.843282  749598 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 13:19:12.847103  749598 out.go:201] 
	W0224 13:19:12.850671  749598 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0224 13:19:12.855233  749598 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-946657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-946657

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-946657

>>> host: /etc/nsswitch.conf:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: /etc/hosts:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: /etc/resolv.conf:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-946657

>>> host: crictl pods:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: crictl containers:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> k8s: describe netcat deployment:
error: context "false-946657" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-946657" does not exist

>>> k8s: netcat logs:
error: context "false-946657" does not exist

>>> k8s: describe coredns deployment:
error: context "false-946657" does not exist

>>> k8s: describe coredns pods:
error: context "false-946657" does not exist

>>> k8s: coredns logs:
error: context "false-946657" does not exist

>>> k8s: describe api server pod(s):
error: context "false-946657" does not exist

>>> k8s: api server logs:
error: context "false-946657" does not exist

>>> host: /etc/cni:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: ip a s:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: ip r s:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: iptables-save:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: iptables table nat:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> k8s: describe kube-proxy daemon set:
error: context "false-946657" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-946657" does not exist

>>> k8s: kube-proxy logs:
error: context "false-946657" does not exist

>>> host: kubelet daemon status:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: kubelet daemon config:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> k8s: kubelet logs:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20451-568444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 13:19:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-812990
contexts:
- context:
    cluster: kubernetes-upgrade-812990
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 13:19:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-812990
  name: kubernetes-upgrade-812990
current-context: kubernetes-upgrade-812990
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-812990
  user:
    client-certificate: /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kubernetes-upgrade-812990/client.crt
    client-key: /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kubernetes-upgrade-812990/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-946657

>>> host: docker daemon status:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: /etc/crio:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

>>> host: crio config:
* Profile "false-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-946657"

----------------------- debugLogs end: false-946657 [took: 4.754329243s] --------------------------------
helpers_test.go:175: Cleaning up "false-946657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-946657
--- PASS: TestNetworkPlugins/group/false (5.20s)

TestStartStop/group/old-k8s-version/serial/FirstStart (154.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-374993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0224 13:22:22.692471  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:22:35.535291  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-374993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m34.511186891s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.51s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-374993 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [74f08014-f2e7-4269-b316-d38466755451] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [74f08014-f2e7-4269-b316-d38466755451] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003718406s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-374993 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-374993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-374993 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-374993 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-374993 --alsologtostderr -v=3: (12.018567055s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-374993 -n old-k8s-version-374993
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-374993 -n old-k8s-version-374993: exit status 7 (96.870202ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-374993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (140.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-374993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-374993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m20.076014982s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-374993 -n old-k8s-version-374993
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.51s)

TestStartStop/group/no-preload/serial/FirstStart (71.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-592106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:24:19.621750  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-592106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (1m11.880669984s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.88s)

TestStartStop/group/no-preload/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-592106 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb0aaa72-178d-4411-85b6-c6fd68a47a62] Pending
helpers_test.go:344: "busybox" [fb0aaa72-178d-4411-85b6-c6fd68a47a62] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb0aaa72-178d-4411-85b6-c6fd68a47a62] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004352592s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-592106 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-592106 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-592106 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.078743466s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-592106 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/no-preload/serial/Stop (11.98s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-592106 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-592106 --alsologtostderr -v=3: (11.983393766s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.98s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-592106 -n no-preload-592106
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-592106 -n no-preload-592106: exit status 7 (76.455394ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-592106 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (300.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-592106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-592106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (5m0.584062346s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-592106 -n no-preload-592106
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.96s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-r85zh" [d6eaa1fb-691c-4ff1-9992-9fc8f1122176] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004667131s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-r85zh" [d6eaa1fb-691c-4ff1-9992-9fc8f1122176] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003483295s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-374993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-374993 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-374993 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-374993 -n old-k8s-version-374993
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-374993 -n old-k8s-version-374993: exit status 2 (331.973505ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-374993 -n old-k8s-version-374993
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-374993 -n old-k8s-version-374993: exit status 2 (332.783018ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-374993 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-374993 -n old-k8s-version-374993
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-374993 -n old-k8s-version-374993
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.04s)

TestStartStop/group/embed-certs/serial/FirstStart (51s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-706350 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-706350 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (50.997636708s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.00s)

TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-706350 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [16b43166-acac-4369-b7ea-273ebd655d95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [16b43166-acac-4369-b7ea-273ebd655d95] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.002671155s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-706350 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-706350 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-706350 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (11.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-706350 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-706350 --alsologtostderr -v=3: (11.981940814s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-706350 -n embed-certs-706350
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-706350 -n embed-certs-706350: exit status 7 (78.885804ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-706350 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (294.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-706350 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:27:35.535378  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:13.912861  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:13.919319  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:13.930773  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:13.952154  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:13.993617  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:14.075131  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:14.236757  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:14.558809  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:15.200803  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:16.482694  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:19.044323  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:24.165939  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:34.407467  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:28:54.888922  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:29:19.621889  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:29:35.851061  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-706350 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m54.095864712s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-706350 -n embed-certs-706350
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (294.56s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hq57s" [bd5a89f1-efa6-418e-922a-91b21f28e46a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004094439s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hq57s" [bd5a89f1-efa6-418e-922a-91b21f28e46a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007908257s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-592106 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-592106 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.15s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-592106 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-592106 -n no-preload-592106
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-592106 -n no-preload-592106: exit status 2 (328.486973ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-592106 -n no-preload-592106
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-592106 -n no-preload-592106: exit status 2 (318.517258ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-592106 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-592106 -n no-preload-592106
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-592106 -n no-preload-592106
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-847398 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:30:57.773001  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-847398 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (52.502481757s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.50s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-847398 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [77e281dc-87fe-45cf-bb2c-b57a3ffd7d5b] Pending
helpers_test.go:344: "busybox" [77e281dc-87fe-45cf-bb2c-b57a3ffd7d5b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [77e281dc-87fe-45cf-bb2c-b57a3ffd7d5b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.007368231s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-847398 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-847398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-847398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.29817026s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-847398 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-847398 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-847398 --alsologtostderr -v=3: (12.088012129s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398: exit status 7 (81.851973ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-847398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-847398 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:32:18.606630  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-847398 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m35.716514841s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.14s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jsxp7" [244cd22e-a7a9-41ea-baba-57d816a3d3a0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003999688s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jsxp7" [244cd22e-a7a9-41ea-baba-57d816a3d3a0] Running
E0224 13:32:35.535706  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003740748s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-706350 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-706350 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-706350 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-706350 -n embed-certs-706350
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-706350 -n embed-certs-706350: exit status 2 (315.725649ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-706350 -n embed-certs-706350
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-706350 -n embed-certs-706350: exit status 2 (331.852018ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-706350 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-706350 -n embed-certs-706350
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-706350 -n embed-certs-706350
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.15s)

TestStartStop/group/newest-cni/serial/FirstStart (36.96s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-245002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0224 13:33:13.912982  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/old-k8s-version-374993/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-245002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (36.958791578s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.96s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-245002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-245002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.337731899s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-245002 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-245002 --alsologtostderr -v=3: (1.254089288s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-245002 -n newest-cni-245002
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-245002 -n newest-cni-245002: exit status 7 (86.697376ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-245002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (15.57s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-245002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-245002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (15.149123247s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-245002 -n newest-cni-245002
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.57s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-245002 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-245002 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-245002 -n newest-cni-245002
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-245002 -n newest-cni-245002: exit status 2 (324.418277ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-245002 -n newest-cni-245002
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-245002 -n newest-cni-245002: exit status 2 (338.34965ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-245002 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-245002 -n newest-cni-245002
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-245002 -n newest-cni-245002
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.08s)

TestNetworkPlugins/group/auto/Start (46.2s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0224 13:34:19.621742  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (46.197069228s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.20s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-946657 "pgrep -a kubelet"
I0224 13:34:29.955620  573823 config.go:182] Loaded profile config "auto-946657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-946657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-m9rt4" [56a4349d-6c17-4626-ad96-dd1121e8976e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-m9rt4" [56a4349d-6c17-4626-ad96-dd1121e8976e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004425497s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.33s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-946657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/Start (47.75s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0224 13:35:03.474432  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:03.480768  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:03.492082  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:03.513425  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:03.554813  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:03.636207  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:03.797652  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:04.119433  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:04.760963  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:06.042238  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:08.604059  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:13.725911  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:23.967667  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:35:44.449026  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (47.751443128s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.75s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-p9rm9" [8f8fc4ed-1c86-4e13-8cb0-f8c2745c41cd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003042621s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-946657 "pgrep -a kubelet"
I0224 13:35:55.567552  573823 config.go:182] Loaded profile config "kindnet-946657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-946657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9nt74" [b89228d6-712b-4935-aeee-5571d55b2b9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9nt74" [b89228d6-712b-4935-aeee-5571d55b2b9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003678447s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)
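The NetCatPod steps above replace a netcat deployment and then poll until pods matching `app=netcat` report healthy (the "waiting 15m0s for pods matching ..." lines). A minimal sketch of that poll-until-ready pattern, with a hypothetical `pod_is_running` stub standing in for the real API-server lookup:

```python
import time

def wait_for_condition(check, timeout_s=60.0, interval_s=1.0):
    """Poll check() until it returns True or timeout_s elapses.

    Raises TimeoutError on timeout. This mirrors the harness pattern
    behind lines like 'waiting 15m0s for pods matching "app=netcat"'.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if check():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout_s}s")
        time.sleep(interval_s)

# Stub predicate: reports Running after three polls, standing in for a
# pod-phase query against the Kubernetes API.
state = {"polls": 0}
def pod_is_running():
    state["polls"] += 1
    return state["polls"] >= 3

wait_for_condition(pod_is_running, timeout_s=10.0, interval_s=0.01)
print(f"healthy after {state['polls']} polls")  # healthy after 3 polls
```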

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-946657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)
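The DNS check runs `nslookup kubernetes.default` inside the netcat pod to verify cluster DNS. The same resolution step can be sketched with the standard library; note `kubernetes.default` only resolves via the in-cluster DNS service, so this example resolves `localhost` instead:

```python
import socket

def resolve(hostname):
    """Return the unique IP addresses for hostname, like a bare nslookup."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# 'kubernetes.default' is only resolvable from inside a pod, so probe
# localhost to demonstrate the call outside the cluster.
addrs = resolve("localhost")
print(addrs)  # typically 127.0.0.1 and/or ::1
```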

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)
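The Localhost and HairPin checks above both reduce to `nc -w 5 -z <host> 8080`: a bare TCP connect with a timeout, against `localhost` in one case and the pod's own service name (`netcat`) in the hairpin case. A self-contained sketch of that probe, with a local listener on an ephemeral port standing in for the in-cluster netcat service:

```python
import socket

def tcp_probe(host, port, timeout_s=5.0):
    """Return True if a TCP connection to host:port succeeds within
    timeout_s, equivalent to 'nc -w 5 -z host port'."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# Stand-in for the netcat service: a loopback listener on an ephemeral port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

print(tcp_probe("127.0.0.1", port))  # True while the listener is up
listener.close()
print(tcp_probe("127.0.0.1", port))  # False once it is gone
```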

TestNetworkPlugins/group/calico/Start (78.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m18.927075315s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.93s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jrzkr" [41bb2544-b3b0-412f-aa22-c9726af37c1f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004068888s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jrzkr" [41bb2544-b3b0-412f-aa22-c9726af37c1f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006185243s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-847398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.20s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-847398 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-847398 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-847398 --alsologtostderr -v=1: (1.120801036s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398: exit status 2 (427.030546ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398: exit status 2 (432.641481ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-847398 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-847398 -n default-k8s-diff-port-847398
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.07s)
E0224 13:40:31.175386  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:49.282595  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:49.289666  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:49.301615  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:49.323638  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:49.365005  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:49.446363  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:49.607914  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:49.930030  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:50.571851  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:51.853275  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:52.204051  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:54.415493  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:40:59.537232  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/custom-flannel/Start (60.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0224 13:37:35.536226  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/addons-961822/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.497683617s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.50s)

TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g5jpr" [0b3416c7-7a4a-4ed8-a3e0-1ed38f33a119] Running
E0224 13:37:47.333070  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/no-preload-592106/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003036603s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-946657 "pgrep -a kubelet"
I0224 13:37:53.472958  573823 config.go:182] Loaded profile config "calico-946657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-946657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dbz5s" [37624215-8de4-4d03-b863-5e592fc52cda] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dbz5s" [37624215-8de4-4d03-b863-5e592fc52cda] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004059724s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-946657 "pgrep -a kubelet"
I0224 13:37:58.739436  573823 config.go:182] Loaded profile config "custom-flannel-946657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-946657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2psfp" [3830d296-c106-4e98-8552-b84c2d68e7d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2psfp" [3830d296-c106-4e98-8552-b84c2d68e7d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004286552s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-946657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-946657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (79.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m19.134253935s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.13s)

TestNetworkPlugins/group/flannel/Start (57.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0224 13:39:02.695370  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:19.621664  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:30.263566  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:30.270048  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:30.281568  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:30.303019  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:30.344462  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:30.426022  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:30.588362  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:30.909765  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:31.551450  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:32.832803  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
E0224 13:39:35.395212  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (57.928121777s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.93s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q66cg" [0523504f-aecb-4410-b30d-bbc226e08096] Running
E0224 13:39:40.517039  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00471137s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-946657 "pgrep -a kubelet"
I0224 13:39:42.050242  573823 config.go:182] Loaded profile config "flannel-946657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-946657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-92m6x" [4dcd1be0-8c1d-41e9-a926-e7af3cbc62c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-92m6x" [4dcd1be0-8c1d-41e9-a926-e7af3cbc62c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.002978963s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-946657 "pgrep -a kubelet"
I0224 13:39:48.764804  573823 config.go:182] Loaded profile config "enable-default-cni-946657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-946657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2gdc7" [e7c0b5b5-7094-40f1-909a-c8e1e25d72fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0224 13:39:50.758326  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/auto-946657/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-2gdc7" [e7c0b5b5-7094-40f1-909a-c8e1e25d72fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003996524s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-946657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-946657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.52s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (42.75s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-946657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (42.752931447s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.75s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-946657 "pgrep -a kubelet"
I0224 13:41:02.689486  573823 config.go:182] Loaded profile config "bridge-946657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-946657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zw8zm" [08cbf905-e10d-40f7-83ad-0c49dc7f937e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zw8zm" [08cbf905-e10d-40f7-83ad-0c49dc7f937e] Running
E0224 13:41:09.778770  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kindnet-946657/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003890919s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-946657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-946657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (32/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-157710 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-157710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-157710
--- SKIP: TestDownloadOnlyKic (0.59s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.38s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-961822 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.38s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-006810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-006810
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (5.6s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-946657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-946657

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-946657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-946657" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: kubelet daemon config:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> k8s: kubelet logs:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20451-568444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 13:19:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-812990
contexts:
- context:
    cluster: kubernetes-upgrade-812990
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 13:19:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-812990
  name: kubernetes-upgrade-812990
current-context: kubernetes-upgrade-812990
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-812990
  user:
    client-certificate: /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kubernetes-upgrade-812990/client.crt
    client-key: /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kubernetes-upgrade-812990/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-946657

>>> host: docker daemon status:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: docker daemon config:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: docker system info:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: cri-docker daemon status:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: cri-docker daemon config:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: cri-dockerd version:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: containerd daemon status:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: containerd daemon config:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: containerd config dump:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: crio daemon status:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: crio daemon config:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: /etc/crio:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

>>> host: crio config:
* Profile "kubenet-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-946657"

----------------------- debugLogs end: kubenet-946657 [took: 5.395312813s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-946657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-946657
--- SKIP: TestNetworkPlugins/group/kubenet (5.60s)
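Every entry in the debug dump above fails the same way because the `kubenet-946657` profile was never created: the test skipped before `minikube start` ran, so no kubectl context exists. A minimal sketch of guarding a kubectl query against a possibly missing context (the context name is taken from the log above; the `get pods` query is just an illustrative placeholder):

```shell
# Guard a kubectl query against a context that may not exist.
# "kubenet-946657" is the profile/context name from the log above.
CTX="kubenet-946657"
if kubectl config get-contexts -o name 2>/dev/null | grep -qx "$CTX"; then
  kubectl --context "$CTX" get pods -A
else
  echo "context \"$CTX\" does not exist"
fi
```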

                                                
                                    
TestNetworkPlugins/group/cilium (4.15s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0224 13:19:19.621875  573823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/functional-307816/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: cilium-946657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-946657

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-946657

>>> host: /etc/nsswitch.conf:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /etc/hosts:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /etc/resolv.conf:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-946657

>>> host: crictl pods:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: crictl containers:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> k8s: describe netcat deployment:
error: context "cilium-946657" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-946657" does not exist

>>> k8s: netcat logs:
error: context "cilium-946657" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-946657" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-946657" does not exist

>>> k8s: coredns logs:
error: context "cilium-946657" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-946657" does not exist

>>> k8s: api server logs:
error: context "cilium-946657" does not exist

>>> host: /etc/cni:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: ip a s:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: ip r s:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: iptables-save:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: iptables table nat:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-946657

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-946657

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-946657" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-946657" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-946657

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-946657

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-946657" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-946657" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-946657" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-946657" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-946657" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: kubelet daemon config:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> k8s: kubelet logs:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20451-568444/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 13:19:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-812990
contexts:
- context:
    cluster: kubernetes-upgrade-812990
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 13:19:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-812990
  name: kubernetes-upgrade-812990
current-context: kubernetes-upgrade-812990
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-812990
  user:
    client-certificate: /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kubernetes-upgrade-812990/client.crt
    client-key: /home/jenkins/minikube-integration/20451-568444/.minikube/profiles/kubernetes-upgrade-812990/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-946657

>>> host: docker daemon status:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: docker daemon config:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: docker system info:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: cri-docker daemon status:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: cri-docker daemon config:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: cri-dockerd version:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: containerd daemon status:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: containerd daemon config:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: containerd config dump:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: crio daemon status:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: crio daemon config:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: /etc/crio:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

>>> host: crio config:
* Profile "cilium-946657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-946657"

----------------------- debugLogs end: cilium-946657 [took: 3.988575569s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-946657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-946657
--- SKIP: TestNetworkPlugins/group/cilium (4.15s)