Test Report: Docker_Linux_crio_arm64 20544

e7fdd0f84498ee11d10e9add99d5b469e36cb1c9:2025-03-19:38785

Test fail (2/331)

| Order | Failed test                                                   | Duration (s) |
|-------|---------------------------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                                   | 154.14       |
| 346   | TestStartStop/group/newest-cni/serial/VerifyKubernetesImages  | 2.88         |
TestAddons/parallel/Ingress (154.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-039972 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-039972 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-039972 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c0da2aa2-5d4b-4655-bac9-596d6bf5a4bd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c0da2aa2-5d4b-4655-bac9-596d6bf5a4bd] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003138948s
I0319 18:30:27.626619  453411 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-039972 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.702364926s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-039972 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
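Note: "Process exited with status 28" above is the remote curl's exit status propagated through ssh; curl exit code 28 is CURLE_OPERATION_TIMEDOUT, so the ingress endpoint never answered at all rather than returning an HTTP error. A minimal manual re-check of the same probe (a sketch; the profile name and binary path are taken from this run, the explicit -m cap is added here):

	# Re-run the probe from addons_test.go:262 with a 60s timeout;
	# exit code 28 again points at a timeout, not an HTTP failure.
	out/minikube-linux-arm64 -p addons-039972 ssh "curl -s -m 60 http://127.0.0.1/ -H 'Host: nginx.example.com'"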
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-039972
helpers_test.go:235: (dbg) docker inspect addons-039972:

-- stdout --
	[
	    {
	        "Id": "fb7c8782a39ae80b81aae63592478f5dedf64112370bdf789812bdc35838b44e",
	        "Created": "2025-03-19T18:25:52.904322746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-19T18:25:52.973420425Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:df0c2544fb3106b890f0a9ab81fcf49f97edb092b83e47f42288ad5dfe1f4b40",
	        "ResolvConfPath": "/var/lib/docker/containers/fb7c8782a39ae80b81aae63592478f5dedf64112370bdf789812bdc35838b44e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb7c8782a39ae80b81aae63592478f5dedf64112370bdf789812bdc35838b44e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb7c8782a39ae80b81aae63592478f5dedf64112370bdf789812bdc35838b44e/hosts",
	        "LogPath": "/var/lib/docker/containers/fb7c8782a39ae80b81aae63592478f5dedf64112370bdf789812bdc35838b44e/fb7c8782a39ae80b81aae63592478f5dedf64112370bdf789812bdc35838b44e-json.log",
	        "Name": "/addons-039972",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-039972:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-039972",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb7c8782a39ae80b81aae63592478f5dedf64112370bdf789812bdc35838b44e",
	                "LowerDir": "/var/lib/docker/overlay2/1860eb1d3c379e9327cfb160f608ec914a51d857e91dd240c36a9c4353902ecd-init/diff:/var/lib/docker/overlay2/55bf5981cfa2c5a324266a998a6b44d59c28d371542dcf93ef413ea591419fb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1860eb1d3c379e9327cfb160f608ec914a51d857e91dd240c36a9c4353902ecd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1860eb1d3c379e9327cfb160f608ec914a51d857e91dd240c36a9c4353902ecd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1860eb1d3c379e9327cfb160f608ec914a51d857e91dd240c36a9c4353902ecd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-039972",
	                "Source": "/var/lib/docker/volumes/addons-039972/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-039972",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-039972",
	                "name.minikube.sigs.k8s.io": "addons-039972",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3058a7aede4deb05d94c34afac2aa6cc574f5880fa85aa4ba9ff3fa03c7a777",
	            "SandboxKey": "/var/run/docker/netns/c3058a7aede4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-039972": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:c4:ea:eb:3d:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee1c11edea34eb5d025ba0d523496939bad82d7ed228f7f8d8fc77cbfd271422",
	                    "EndpointID": "fb08773840e232b14149c6b05421cbb93e1a27861a1eda61c7a4de6ab488bb53",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-039972",
	                        "fb7c8782a39a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
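For quicker triage than reading the full inspect dump, the published port map alone can be extracted with Docker's format flag (a sketch; the container name is taken from this run):

	# Prints only the host-port bindings, e.g. 22/tcp -> 127.0.0.1:33163.
	docker inspect addons-039972 --format '{{json .NetworkSettings.Ports}}'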
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-039972 -n addons-039972
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 logs -n 25: (1.648009957s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-818659                                                                     | download-only-818659   | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC | 19 Mar 25 18:25 UTC |
	| start   | --download-only -p                                                                          | download-docker-027384 | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC |                     |
	|         | download-docker-027384                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-027384                                                                   | download-docker-027384 | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC | 19 Mar 25 18:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-745400   | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC |                     |
	|         | binary-mirror-745400                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34039                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-745400                                                                     | binary-mirror-745400   | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC | 19 Mar 25 18:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC |                     |
	|         | addons-039972                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC |                     |
	|         | addons-039972                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-039972 --wait=true                                                                | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC | 19 Mar 25 18:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-039972 addons disable                                                                | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:28 UTC | 19 Mar 25 18:28 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-039972 addons disable                                                                | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:28 UTC | 19 Mar 25 18:28 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:28 UTC | 19 Mar 25 18:28 UTC |
	|         | -p addons-039972                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-039972 addons disable                                                                | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:28 UTC | 19 Mar 25 18:29 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-039972 ip                                                                            | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:29 UTC | 19 Mar 25 18:29 UTC |
	| addons  | addons-039972 addons disable                                                                | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:29 UTC | 19 Mar 25 18:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-039972 addons disable                                                                | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:29 UTC | 19 Mar 25 18:29 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-039972 addons                                                                        | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:29 UTC | 19 Mar 25 18:29 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-039972 ssh cat                                                                       | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:29 UTC | 19 Mar 25 18:29 UTC |
	|         | /opt/local-path-provisioner/pvc-e1416b69-1a54-4b11-ad23-fffdc53b9f83_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-039972 addons disable                                                                | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:29 UTC | 19 Mar 25 18:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-039972 addons                                                                        | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:29 UTC | 19 Mar 25 18:29 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-039972 addons                                                                        | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:29 UTC | 19 Mar 25 18:29 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-039972 addons                                                                        | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:30 UTC | 19 Mar 25 18:30 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-039972 addons                                                                        | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:30 UTC | 19 Mar 25 18:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-039972 addons                                                                        | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:30 UTC | 19 Mar 25 18:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-039972 ssh curl -s                                                                   | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-039972 ip                                                                            | addons-039972          | jenkins | v1.35.0 | 19 Mar 25 18:32 UTC | 19 Mar 25 18:32 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/19 18:25:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 18:25:27.327320  454179 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:25:27.327506  454179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:25:27.327535  454179 out.go:358] Setting ErrFile to fd 2...
	I0319 18:25:27.327554  454179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:25:27.327926  454179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 18:25:27.328556  454179 out.go:352] Setting JSON to false
	I0319 18:25:27.329572  454179 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7661,"bootTime":1742401066,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0319 18:25:27.329710  454179 start.go:139] virtualization:  
	I0319 18:25:27.333038  454179 out.go:177] * [addons-039972] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0319 18:25:27.336811  454179 out.go:177]   - MINIKUBE_LOCATION=20544
	I0319 18:25:27.336943  454179 notify.go:220] Checking for updates...
	I0319 18:25:27.342343  454179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 18:25:27.345253  454179 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 18:25:27.348183  454179 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	I0319 18:25:27.351040  454179 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0319 18:25:27.354049  454179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 18:25:27.357308  454179 driver.go:394] Setting default libvirt URI to qemu:///system
	I0319 18:25:27.383664  454179 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
	I0319 18:25:27.383806  454179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:25:27.452078  454179 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-03-19 18:25:27.442620606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:25:27.452189  454179 docker.go:318] overlay module found
	I0319 18:25:27.456695  454179 out.go:177] * Using the docker driver based on user configuration
	I0319 18:25:27.459598  454179 start.go:297] selected driver: docker
	I0319 18:25:27.459621  454179 start.go:901] validating driver "docker" against <nil>
	I0319 18:25:27.459635  454179 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 18:25:27.460329  454179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:25:27.518782  454179 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-03-19 18:25:27.510026438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:25:27.518938  454179 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 18:25:27.519179  454179 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 18:25:27.522092  454179 out.go:177] * Using Docker driver with root privileges
	I0319 18:25:27.525023  454179 cni.go:84] Creating CNI manager for ""
	I0319 18:25:27.525095  454179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0319 18:25:27.525108  454179 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0319 18:25:27.525197  454179 start.go:340] cluster config:
	{Name:addons-039972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-039972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 18:25:27.528385  454179 out.go:177] * Starting "addons-039972" primary control-plane node in "addons-039972" cluster
	I0319 18:25:27.531131  454179 cache.go:121] Beginning downloading kic base image for docker with crio
	I0319 18:25:27.534116  454179 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0319 18:25:27.536967  454179 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0319 18:25:27.536997  454179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0319 18:25:27.537016  454179 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0319 18:25:27.537024  454179 cache.go:56] Caching tarball of preloaded images
	I0319 18:25:27.537121  454179 preload.go:172] Found /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0319 18:25:27.537131  454179 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0319 18:25:27.537473  454179 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/config.json ...
	I0319 18:25:27.537503  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/config.json: {Name:mkb2d6e9a167efb9872502f895eb00592511967f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:27.552988  454179 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0319 18:25:27.553138  454179 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0319 18:25:27.553157  454179 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0319 18:25:27.553162  454179 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0319 18:25:27.553170  454179 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0319 18:25:27.553175  454179 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 from local cache
	I0319 18:25:45.045080  454179 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 from cached tarball
	I0319 18:25:45.045123  454179 cache.go:230] Successfully downloaded all kic artifacts
	I0319 18:25:45.045180  454179 start.go:360] acquireMachinesLock for addons-039972: {Name:mkf488b7efefaa389e168cf273986de61874d10e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 18:25:45.046905  454179 start.go:364] duration metric: took 1.679333ms to acquireMachinesLock for "addons-039972"
	I0319 18:25:45.046987  454179 start.go:93] Provisioning new machine with config: &{Name:addons-039972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-039972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 18:25:45.047109  454179 start.go:125] createHost starting for "" (driver="docker")
	I0319 18:25:45.051542  454179 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0319 18:25:45.051829  454179 start.go:159] libmachine.API.Create for "addons-039972" (driver="docker")
	I0319 18:25:45.051876  454179 client.go:168] LocalClient.Create starting
	I0319 18:25:45.052030  454179 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem
	I0319 18:25:45.832083  454179 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/cert.pem
	I0319 18:25:46.203848  454179 cli_runner.go:164] Run: docker network inspect addons-039972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 18:25:46.220861  454179 cli_runner.go:211] docker network inspect addons-039972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 18:25:46.220944  454179 network_create.go:284] running [docker network inspect addons-039972] to gather additional debugging logs...
	I0319 18:25:46.220981  454179 cli_runner.go:164] Run: docker network inspect addons-039972
	W0319 18:25:46.237226  454179 cli_runner.go:211] docker network inspect addons-039972 returned with exit code 1
	I0319 18:25:46.237257  454179 network_create.go:287] error running [docker network inspect addons-039972]: docker network inspect addons-039972: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-039972 not found
	I0319 18:25:46.237278  454179 network_create.go:289] output of [docker network inspect addons-039972]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-039972 not found
	
	** /stderr **
	I0319 18:25:46.237377  454179 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 18:25:46.253954  454179 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001961520}
	I0319 18:25:46.253994  454179 network_create.go:124] attempt to create docker network addons-039972 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0319 18:25:46.254058  454179 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-039972 addons-039972
	I0319 18:25:46.318013  454179 network_create.go:108] docker network addons-039972 192.168.49.0/24 created
	I0319 18:25:46.318046  454179 kic.go:121] calculated static IP "192.168.49.2" for the "addons-039972" container
	I0319 18:25:46.318129  454179 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 18:25:46.333153  454179 cli_runner.go:164] Run: docker volume create addons-039972 --label name.minikube.sigs.k8s.io=addons-039972 --label created_by.minikube.sigs.k8s.io=true
	I0319 18:25:46.351577  454179 oci.go:103] Successfully created a docker volume addons-039972
	I0319 18:25:46.351661  454179 cli_runner.go:164] Run: docker run --rm --name addons-039972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-039972 --entrypoint /usr/bin/test -v addons-039972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0319 18:25:48.510899  454179 cli_runner.go:217] Completed: docker run --rm --name addons-039972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-039972 --entrypoint /usr/bin/test -v addons-039972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib: (2.159198303s)
	I0319 18:25:48.510930  454179 oci.go:107] Successfully prepared a docker volume addons-039972
	I0319 18:25:48.510976  454179 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0319 18:25:48.510995  454179 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 18:25:48.511053  454179 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-039972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0319 18:25:52.828471  454179 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-039972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.317375374s)
	I0319 18:25:52.828503  454179 kic.go:203] duration metric: took 4.317504597s to extract preloaded images to volume ...
	W0319 18:25:52.828652  454179 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0319 18:25:52.828762  454179 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0319 18:25:52.889002  454179 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-039972 --name addons-039972 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-039972 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-039972 --network addons-039972 --ip 192.168.49.2 --volume addons-039972:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0319 18:25:53.209326  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Running}}
	I0319 18:25:53.229665  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:25:53.254967  454179 cli_runner.go:164] Run: docker exec addons-039972 stat /var/lib/dpkg/alternatives/iptables
	I0319 18:25:53.307450  454179 oci.go:144] the created container "addons-039972" has a running status.
	I0319 18:25:53.307478  454179 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa...
	I0319 18:25:54.392039  454179 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0319 18:25:54.421617  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:25:54.440271  454179 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0319 18:25:54.440296  454179 kic_runner.go:114] Args: [docker exec --privileged addons-039972 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0319 18:25:54.481262  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:25:54.498974  454179 machine.go:93] provisionDockerMachine start ...
	I0319 18:25:54.499081  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:54.516193  454179 main.go:141] libmachine: Using SSH client type: native
	I0319 18:25:54.516519  454179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0319 18:25:54.516529  454179 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 18:25:54.644857  454179 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-039972
	
	I0319 18:25:54.644881  454179 ubuntu.go:169] provisioning hostname "addons-039972"
	I0319 18:25:54.644945  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:54.664905  454179 main.go:141] libmachine: Using SSH client type: native
	I0319 18:25:54.665218  454179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0319 18:25:54.665235  454179 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-039972 && echo "addons-039972" | sudo tee /etc/hostname
	I0319 18:25:54.797301  454179 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-039972
	
	I0319 18:25:54.797420  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:54.815044  454179 main.go:141] libmachine: Using SSH client type: native
	I0319 18:25:54.815366  454179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0319 18:25:54.815388  454179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-039972' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-039972/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-039972' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 18:25:54.937854  454179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
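	Note: the script above follows the Debian convention of pointing the machine's hostname at 127.0.1.1, so after provisioning /etc/hosts carries a line like this (illustrative, derived from the script, not a captured file):

	  127.0.1.1 addons-039972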
	I0319 18:25:54.937880  454179 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20544-448023/.minikube CaCertPath:/home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20544-448023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20544-448023/.minikube}
	I0319 18:25:54.937906  454179 ubuntu.go:177] setting up certificates
	I0319 18:25:54.937919  454179 provision.go:84] configureAuth start
	I0319 18:25:54.937984  454179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-039972
	I0319 18:25:54.955870  454179 provision.go:143] copyHostCerts
	I0319 18:25:54.955961  454179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20544-448023/.minikube/ca.pem (1082 bytes)
	I0319 18:25:54.956088  454179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20544-448023/.minikube/cert.pem (1123 bytes)
	I0319 18:25:54.956156  454179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20544-448023/.minikube/key.pem (1679 bytes)
	I0319 18:25:54.956215  454179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20544-448023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca-key.pem org=jenkins.addons-039972 san=[127.0.0.1 192.168.49.2 addons-039972 localhost minikube]
	I0319 18:25:55.584080  454179 provision.go:177] copyRemoteCerts
	I0319 18:25:55.584154  454179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 18:25:55.584203  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:55.601281  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:25:55.691015  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 18:25:55.716058  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0319 18:25:55.740107  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 18:25:55.763630  454179 provision.go:87] duration metric: took 825.694662ms to configureAuth
	I0319 18:25:55.763656  454179 ubuntu.go:193] setting minikube options for container-runtime
	I0319 18:25:55.763835  454179 config.go:182] Loaded profile config "addons-039972": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:25:55.763945  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:55.780810  454179 main.go:141] libmachine: Using SSH client type: native
	I0319 18:25:55.781125  454179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0319 18:25:55.781148  454179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 18:25:56.010755  454179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 18:25:56.010777  454179 machine.go:96] duration metric: took 1.511781693s to provisionDockerMachine
	I0319 18:25:56.010788  454179 client.go:171] duration metric: took 10.958904946s to LocalClient.Create
	I0319 18:25:56.010809  454179 start.go:167] duration metric: took 10.958982337s to libmachine.API.Create "addons-039972"
	I0319 18:25:56.010817  454179 start.go:293] postStartSetup for "addons-039972" (driver="docker")
	I0319 18:25:56.010827  454179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 18:25:56.010899  454179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 18:25:56.010980  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:56.030426  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:25:56.127204  454179 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 18:25:56.130431  454179 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0319 18:25:56.130467  454179 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0319 18:25:56.130504  454179 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0319 18:25:56.130516  454179 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0319 18:25:56.130532  454179 filesync.go:126] Scanning /home/jenkins/minikube-integration/20544-448023/.minikube/addons for local assets ...
	I0319 18:25:56.130690  454179 filesync.go:126] Scanning /home/jenkins/minikube-integration/20544-448023/.minikube/files for local assets ...
	I0319 18:25:56.130725  454179 start.go:296] duration metric: took 119.902309ms for postStartSetup
	I0319 18:25:56.131094  454179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-039972
	I0319 18:25:56.148413  454179 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/config.json ...
	I0319 18:25:56.148771  454179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 18:25:56.148826  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:56.165696  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:25:56.254904  454179 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 18:25:56.259808  454179 start.go:128] duration metric: took 11.212678059s to createHost
	I0319 18:25:56.259829  454179 start.go:83] releasing machines lock for "addons-039972", held for 11.212887536s
	I0319 18:25:56.259898  454179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-039972
	I0319 18:25:56.276362  454179 ssh_runner.go:195] Run: cat /version.json
	I0319 18:25:56.276388  454179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 18:25:56.276412  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:56.276454  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:25:56.293998  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:25:56.301719  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:25:56.514462  454179 ssh_runner.go:195] Run: systemctl --version
	I0319 18:25:56.518913  454179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 18:25:56.661757  454179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0319 18:25:56.665985  454179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 18:25:56.690702  454179 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0319 18:25:56.690792  454179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 18:25:56.721607  454179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
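	Note: rather than deleting the base image's CNI configs, minikube renames them with a .mk_disabled suffix so that only the kindnet config installed later is active. The two files named above end up as (reconstructed from the find/mv commands, not a captured listing):

	  /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	  /etc/cni/net.d/100-crio-bridge.conf.mk_disabled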
	I0319 18:25:56.721629  454179 start.go:495] detecting cgroup driver to use...
	I0319 18:25:56.721662  454179 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0319 18:25:56.721712  454179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 18:25:56.737528  454179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 18:25:56.749430  454179 docker.go:217] disabling cri-docker service (if available) ...
	I0319 18:25:56.749503  454179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 18:25:56.763922  454179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 18:25:56.778173  454179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 18:25:56.867109  454179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 18:25:56.963897  454179 docker.go:233] disabling docker service ...
	I0319 18:25:56.963966  454179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 18:25:56.985301  454179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 18:25:56.996880  454179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 18:25:57.088323  454179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 18:25:57.185723  454179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 18:25:57.198342  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 18:25:57.215246  454179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0319 18:25:57.215356  454179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 18:25:57.225286  454179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 18:25:57.225386  454179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 18:25:57.235290  454179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 18:25:57.245558  454179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 18:25:57.255988  454179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 18:25:57.265814  454179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 18:25:57.276447  454179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 18:25:57.293250  454179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
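	Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a reconstruction from the commands, not a captured file):

	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]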
	I0319 18:25:57.302968  454179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 18:25:57.312006  454179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 18:25:57.320305  454179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 18:25:57.421915  454179 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 18:25:57.541066  454179 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 18:25:57.541161  454179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 18:25:57.544634  454179 start.go:563] Will wait 60s for crictl version
	I0319 18:25:57.544704  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:25:57.547977  454179 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 18:25:57.585860  454179 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0319 18:25:57.585976  454179 ssh_runner.go:195] Run: crio --version
	I0319 18:25:57.622972  454179 ssh_runner.go:195] Run: crio --version
	I0319 18:25:57.663905  454179 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0319 18:25:57.666810  454179 cli_runner.go:164] Run: docker network inspect addons-039972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 18:25:57.683728  454179 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0319 18:25:57.687376  454179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
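	Note: the grep/echo/cp pattern above rewrites /etc/hosts by building the new file in /tmp as the unprivileged user and then copying it into place under sudo. The detour is needed because shell redirection is performed by the calling shell before sudo takes effect, so a naive attempt fails:

	  $ sudo echo "192.168.49.1 host.minikube.internal" > /etc/hosts
	  bash: /etc/hosts: Permission denied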
	I0319 18:25:57.698242  454179 kubeadm.go:883] updating cluster {Name:addons-039972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-039972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 18:25:57.698356  454179 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0319 18:25:57.698426  454179 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 18:25:57.775956  454179 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 18:25:57.775980  454179 crio.go:433] Images already preloaded, skipping extraction
	I0319 18:25:57.776041  454179 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 18:25:57.811982  454179 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 18:25:57.812006  454179 cache_images.go:84] Images are preloaded, skipping loading
	I0319 18:25:57.812017  454179 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 crio true true} ...
	I0319 18:25:57.812099  454179 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-039972 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-039972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0319 18:25:57.812181  454179 ssh_runner.go:195] Run: crio config
	I0319 18:25:57.879783  454179 cni.go:84] Creating CNI manager for ""
	I0319 18:25:57.879818  454179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0319 18:25:57.879838  454179 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0319 18:25:57.879862  454179 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-039972 NodeName:addons-039972 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 18:25:57.879992  454179 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-039972"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0319 18:25:57.880063  454179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0319 18:25:57.888523  454179 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 18:25:57.888596  454179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 18:25:57.897034  454179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0319 18:25:57.914867  454179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 18:25:57.932535  454179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
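	Note: the rendered config is staged as kubeadm.yaml.new here and copied over kubeadm.yaml at 18:26:01 below. A file like this can also be checked by hand; recent kubeadm releases ship a validator (illustrative invocation):

	  $ sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml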
	I0319 18:25:57.950269  454179 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0319 18:25:57.953553  454179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 18:25:57.964293  454179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 18:25:58.056805  454179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 18:25:58.071318  454179 certs.go:68] Setting up /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972 for IP: 192.168.49.2
	I0319 18:25:58.071341  454179 certs.go:194] generating shared ca certs ...
	I0319 18:25:58.071357  454179 certs.go:226] acquiring lock for ca certs: {Name:mkd8a6899d1e79d8873b3a9b4a64f23be9e68740 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:58.071482  454179 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20544-448023/.minikube/ca.key
	I0319 18:25:58.602593  454179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20544-448023/.minikube/ca.crt ...
	I0319 18:25:58.602644  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/ca.crt: {Name:mk8738d020e71f3624ac7c8f3ef30acc21b96f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:58.602833  454179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20544-448023/.minikube/ca.key ...
	I0319 18:25:58.602846  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/ca.key: {Name:mk7683c6d989bd681a537997e3c095fa8b419443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:58.602933  454179 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.key
	I0319 18:25:59.022018  454179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.crt ...
	I0319 18:25:59.022046  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.crt: {Name:mkeda1dea44b020b10bdf62e2a5a8bcd7671d117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:59.022245  454179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.key ...
	I0319 18:25:59.022261  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.key: {Name:mk9844cee1bf52597bf1cc1d3b9a2aa30b24669b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:59.022344  454179 certs.go:256] generating profile certs ...
	I0319 18:25:59.022405  454179 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.key
	I0319 18:25:59.022421  454179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt with IP's: []
	I0319 18:25:59.231307  454179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt ...
	I0319 18:25:59.231337  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: {Name:mk4d37736c06a80fadc54918c022fa1ae2fde99a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:59.231515  454179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.key ...
	I0319 18:25:59.231528  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.key: {Name:mk804c87cf26b48751bdbd8216c8a6a3304f089b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:59.231613  454179 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.key.ff324a97
	I0319 18:25:59.231632  454179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.crt.ff324a97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0319 18:26:00.313544  454179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.crt.ff324a97 ...
	I0319 18:26:00.313636  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.crt.ff324a97: {Name:mkb38d106c1bb9a074008f09e0ce9a6632269cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:26:00.313898  454179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.key.ff324a97 ...
	I0319 18:26:00.313941  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.key.ff324a97: {Name:mkfbd5e1fcf148af4f17f167e368c484438d4803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:26:00.314876  454179 certs.go:381] copying /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.crt.ff324a97 -> /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.crt
	I0319 18:26:00.315058  454179 certs.go:385] copying /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.key.ff324a97 -> /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.key
	I0319 18:26:00.315184  454179 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/proxy-client.key
	I0319 18:26:00.315230  454179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/proxy-client.crt with IP's: []
	I0319 18:26:00.718332  454179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/proxy-client.crt ...
	I0319 18:26:00.718368  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/proxy-client.crt: {Name:mk3bf86fe0324cf15b914a57a0152cea1a455628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:26:00.718561  454179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/proxy-client.key ...
	I0319 18:26:00.718576  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/proxy-client.key: {Name:mk7e5edf6ed3a63fc6be9a3ef9112bb5e423da14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:26:00.718772  454179 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca-key.pem (1675 bytes)
	I0319 18:26:00.718815  454179 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem (1082 bytes)
	I0319 18:26:00.718846  454179 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/cert.pem (1123 bytes)
	I0319 18:26:00.718876  454179 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/key.pem (1679 bytes)
	I0319 18:26:00.719452  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 18:26:00.743993  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 18:26:00.768062  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 18:26:00.791779  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 18:26:00.815284  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0319 18:26:00.839661  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 18:26:00.863649  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 18:26:00.887946  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 18:26:00.911464  454179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 18:26:00.935724  454179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
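	Note: the apiserver certificate generated above was signed for IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2 (per the crypto.go line at 18:25:59.231632). One way to confirm the SANs on the node (illustrative; -ext needs OpenSSL 1.1.1 or newer):

	  $ sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -ext subjectAltName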
	I0319 18:26:00.953148  454179 ssh_runner.go:195] Run: openssl version
	I0319 18:26:00.958656  454179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 18:26:00.968153  454179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 18:26:00.971527  454179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 18:25 /usr/share/ca-certificates/minikubeCA.pem
	I0319 18:26:00.971611  454179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 18:26:00.978461  454179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
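	Note: the b5213941.0 link name follows OpenSSL's subject-hash lookup convention: the hash printed by the x509 command at 18:26:00.971611 names the symlink under /etc/ssl/certs, and the .0 suffix disambiguates collisions. Illustratively:

	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  b5213941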
	I0319 18:26:00.987717  454179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 18:26:00.991100  454179 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0319 18:26:00.991181  454179 kubeadm.go:392] StartCluster: {Name:addons-039972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-039972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 18:26:00.991276  454179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 18:26:00.991346  454179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 18:26:01.030272  454179 cri.go:89] found id: ""
	I0319 18:26:01.030392  454179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 18:26:01.039412  454179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0319 18:26:01.048131  454179 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0319 18:26:01.048231  454179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0319 18:26:01.056997  454179 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0319 18:26:01.057016  454179 kubeadm.go:157] found existing configuration files:
	
	I0319 18:26:01.057095  454179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0319 18:26:01.065643  454179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0319 18:26:01.065708  454179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0319 18:26:01.074361  454179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0319 18:26:01.083289  454179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0319 18:26:01.083379  454179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0319 18:26:01.091953  454179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0319 18:26:01.100943  454179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0319 18:26:01.101050  454179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0319 18:26:01.109904  454179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0319 18:26:01.119729  454179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0319 18:26:01.119810  454179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0319 18:26:01.128992  454179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0319 18:26:01.172611  454179 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0319 18:26:01.172675  454179 kubeadm.go:310] [preflight] Running pre-flight checks
	I0319 18:26:01.210782  454179 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0319 18:26:01.210868  454179 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1077-aws
	I0319 18:26:01.210906  454179 kubeadm.go:310] OS: Linux
	I0319 18:26:01.210952  454179 kubeadm.go:310] CGROUPS_CPU: enabled
	I0319 18:26:01.211000  454179 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0319 18:26:01.211047  454179 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0319 18:26:01.211095  454179 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0319 18:26:01.211143  454179 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0319 18:26:01.211190  454179 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0319 18:26:01.211235  454179 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0319 18:26:01.211290  454179 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0319 18:26:01.211336  454179 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0319 18:26:01.287079  454179 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0319 18:26:01.287196  454179 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0319 18:26:01.287300  454179 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0319 18:26:01.293961  454179 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0319 18:26:01.300859  454179 out.go:235]   - Generating certificates and keys ...
	I0319 18:26:01.300969  454179 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0319 18:26:01.301038  454179 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0319 18:26:02.279445  454179 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0319 18:26:03.244287  454179 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0319 18:26:03.813214  454179 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0319 18:26:04.862390  454179 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0319 18:26:05.275450  454179 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0319 18:26:05.275586  454179 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-039972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0319 18:26:05.889184  454179 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0319 18:26:05.889315  454179 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-039972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0319 18:26:06.169763  454179 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0319 18:26:06.878735  454179 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0319 18:26:07.340086  454179 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0319 18:26:07.340432  454179 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0319 18:26:07.990054  454179 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0319 18:26:08.573330  454179 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0319 18:26:08.871150  454179 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0319 18:26:09.689864  454179 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0319 18:26:10.266776  454179 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0319 18:26:10.267614  454179 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0319 18:26:10.270728  454179 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0319 18:26:10.274185  454179 out.go:235]   - Booting up control plane ...
	I0319 18:26:10.274291  454179 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0319 18:26:10.274365  454179 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0319 18:26:10.274427  454179 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0319 18:26:10.284560  454179 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0319 18:26:10.291796  454179 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0319 18:26:10.291850  454179 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0319 18:26:10.378012  454179 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0319 18:26:10.378136  454179 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0319 18:26:11.401810  454179 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.02351764s
	I0319 18:26:11.401899  454179 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0319 18:26:16.902814  454179 kubeadm.go:310] [api-check] The API server is healthy after 5.501357011s
	I0319 18:26:16.935574  454179 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0319 18:26:16.950402  454179 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0319 18:26:16.979031  454179 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0319 18:26:16.979226  454179 kubeadm.go:310] [mark-control-plane] Marking the node addons-039972 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0319 18:26:16.989695  454179 kubeadm.go:310] [bootstrap-token] Using token: sgp5zw.3vxnhchojqhgyaji
	I0319 18:26:16.992819  454179 out.go:235]   - Configuring RBAC rules ...
	I0319 18:26:16.992949  454179 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0319 18:26:16.998338  454179 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0319 18:26:17.006652  454179 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0319 18:26:17.011199  454179 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0319 18:26:17.015449  454179 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0319 18:26:17.021137  454179 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0319 18:26:17.310847  454179 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0319 18:26:17.746112  454179 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0319 18:26:18.309880  454179 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0319 18:26:18.310988  454179 kubeadm.go:310] 
	I0319 18:26:18.311066  454179 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0319 18:26:18.311074  454179 kubeadm.go:310] 
	I0319 18:26:18.311146  454179 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0319 18:26:18.311151  454179 kubeadm.go:310] 
	I0319 18:26:18.311175  454179 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0319 18:26:18.311229  454179 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0319 18:26:18.311276  454179 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0319 18:26:18.311281  454179 kubeadm.go:310] 
	I0319 18:26:18.311331  454179 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0319 18:26:18.311335  454179 kubeadm.go:310] 
	I0319 18:26:18.311380  454179 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0319 18:26:18.311384  454179 kubeadm.go:310] 
	I0319 18:26:18.311432  454179 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0319 18:26:18.311502  454179 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0319 18:26:18.311565  454179 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0319 18:26:18.311569  454179 kubeadm.go:310] 
	I0319 18:26:18.311648  454179 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0319 18:26:18.311720  454179 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0319 18:26:18.311725  454179 kubeadm.go:310] 
	I0319 18:26:18.311805  454179 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sgp5zw.3vxnhchojqhgyaji \
	I0319 18:26:18.311901  454179 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee6d2662606a6b1d2bdaccf5c7670d9b08bf16ae684ff3aea37ad2acc71adcf \
	I0319 18:26:18.311921  454179 kubeadm.go:310] 	--control-plane 
	I0319 18:26:18.311931  454179 kubeadm.go:310] 
	I0319 18:26:18.312011  454179 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0319 18:26:18.312015  454179 kubeadm.go:310] 
	I0319 18:26:18.312091  454179 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sgp5zw.3vxnhchojqhgyaji \
	I0319 18:26:18.312186  454179 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee6d2662606a6b1d2bdaccf5c7670d9b08bf16ae684ff3aea37ad2acc71adcf 
	I0319 18:26:18.315946  454179 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0319 18:26:18.316177  454179 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1077-aws\n", err: exit status 1
	I0319 18:26:18.316301  454179 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
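	Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key, which lets a joining node pin the CA without any prior trust channel. It can be recomputed from the CA that minikube keeps on the node (the standard kubeadm recipe, adapted to minikube's certificate path):

	  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex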
	I0319 18:26:18.316321  454179 cni.go:84] Creating CNI manager for ""
	I0319 18:26:18.316330  454179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0319 18:26:18.321287  454179 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0319 18:26:18.324239  454179 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0319 18:26:18.328431  454179 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0319 18:26:18.328448  454179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0319 18:26:18.347272  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0319 18:26:18.625849  454179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0319 18:26:18.625945  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:18.626041  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-039972 minikube.k8s.io/updated_at=2025_03_19T18_26_18_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=d76a625434f413a89ad1bb610dea10300ea9201f minikube.k8s.io/name=addons-039972 minikube.k8s.io/primary=true
	I0319 18:26:18.735873  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:18.735967  454179 ops.go:34] apiserver oom_adj: -16
	I0319 18:26:19.235994  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:19.736691  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:20.236499  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:20.736620  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:21.235988  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:21.736850  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:22.236299  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:22.735934  454179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0319 18:26:22.879102  454179 kubeadm.go:1113] duration metric: took 4.253219657s to wait for elevateKubeSystemPrivileges
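	Note: the burst of "kubectl get sa default" calls above is a readiness poll: the default ServiceAccount is created asynchronously after kubeadm finishes, so minikube retries the lookup (roughly every 500ms, per the timestamps) until it exists before granting kube-system privileges; here that took about 4.25s. The equivalent manual check on the node:

	  $ sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get sa default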
	I0319 18:26:22.879129  454179 kubeadm.go:394] duration metric: took 21.887951173s to StartCluster
	I0319 18:26:22.879146  454179 settings.go:142] acquiring lock: {Name:mk7bcf22d5090743d25ff681e3c908a88736d42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:26:22.880801  454179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 18:26:22.881241  454179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/kubeconfig: {Name:mk54867cb0e9cc74fa0dd9ec986d9fb8d5ff5dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:26:22.881439  454179 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 18:26:22.881608  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0319 18:26:22.881927  454179 config.go:182] Loaded profile config "addons-039972": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:26:22.881962  454179 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
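The toEnable map above is the per-profile addon configuration; each entry set to true is enabled in the steps that follow. For reference, the same toggles are available from the CLI:

    minikube -p addons-039972 addons list
    minikube -p addons-039972 addons enable ingress
    minikube -p addons-039972 addons disable volcano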
	I0319 18:26:22.882032  454179 addons.go:69] Setting yakd=true in profile "addons-039972"
	I0319 18:26:22.882057  454179 addons.go:238] Setting addon yakd=true in "addons-039972"
	I0319 18:26:22.882080  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.882564  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.883145  454179 addons.go:69] Setting inspektor-gadget=true in profile "addons-039972"
	I0319 18:26:22.883161  454179 addons.go:238] Setting addon inspektor-gadget=true in "addons-039972"
	I0319 18:26:22.883183  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.883593  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.883956  454179 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-039972"
	I0319 18:26:22.883982  454179 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-039972"
	I0319 18:26:22.884006  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.884438  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.886244  454179 addons.go:69] Setting metrics-server=true in profile "addons-039972"
	I0319 18:26:22.887525  454179 addons.go:238] Setting addon metrics-server=true in "addons-039972"
	I0319 18:26:22.887685  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.886399  454179 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-039972"
	I0319 18:26:22.888638  454179 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-039972"
	I0319 18:26:22.888666  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.889080  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.890454  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.886408  454179 addons.go:69] Setting registry=true in profile "addons-039972"
	I0319 18:26:22.886412  454179 addons.go:69] Setting storage-provisioner=true in profile "addons-039972"
	I0319 18:26:22.886416  454179 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-039972"
	I0319 18:26:22.886419  454179 addons.go:69] Setting volcano=true in profile "addons-039972"
	I0319 18:26:22.886427  454179 addons.go:69] Setting volumesnapshots=true in profile "addons-039972"
	I0319 18:26:22.886477  454179 out.go:177] * Verifying Kubernetes components...
	I0319 18:26:22.887408  454179 addons.go:69] Setting gcp-auth=true in profile "addons-039972"
	I0319 18:26:22.887417  454179 addons.go:69] Setting cloud-spanner=true in profile "addons-039972"
	I0319 18:26:22.887456  454179 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-039972"
	I0319 18:26:22.887460  454179 addons.go:69] Setting default-storageclass=true in profile "addons-039972"
	I0319 18:26:22.887467  454179 addons.go:69] Setting ingress-dns=true in profile "addons-039972"
	I0319 18:26:22.887472  454179 addons.go:69] Setting ingress=true in profile "addons-039972"
	I0319 18:26:22.890698  454179 addons.go:238] Setting addon ingress=true in "addons-039972"
	I0319 18:26:22.890732  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.891123  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.902537  454179 addons.go:238] Setting addon volumesnapshots=true in "addons-039972"
	I0319 18:26:22.902649  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.903215  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.924816  454179 mustload.go:65] Loading cluster: addons-039972
	I0319 18:26:22.925220  454179 config.go:182] Loaded profile config "addons-039972": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:26:22.925546  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.939184  454179 addons.go:238] Setting addon cloud-spanner=true in "addons-039972"
	I0319 18:26:22.939242  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.939322  454179 addons.go:238] Setting addon registry=true in "addons-039972"
	I0319 18:26:22.939344  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.939862  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.951206  454179 addons.go:238] Setting addon storage-provisioner=true in "addons-039972"
	I0319 18:26:22.951255  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.951728  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.956893  454179 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-039972"
	I0319 18:26:22.956986  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.966746  454179 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-039972"
	I0319 18:26:22.967102  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.967810  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.985834  454179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-039972"
	I0319 18:26:22.986239  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:22.986889  454179 addons.go:238] Setting addon volcano=true in "addons-039972"
	I0319 18:26:22.986934  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:22.987353  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:23.006934  454179 addons.go:238] Setting addon ingress-dns=true in "addons-039972"
	I0319 18:26:23.007033  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:23.008143  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:23.058724  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:23.100688  454179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 18:26:23.117922  454179 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0319 18:26:23.139108  454179 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0319 18:26:23.149977  454179 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0319 18:26:23.151213  454179 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0319 18:26:23.153187  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:23.153329  454179 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0319 18:26:23.153349  454179 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0319 18:26:23.153407  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
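The cli_runner lines of this shape use a Go template against `docker container inspect` to recover the host port that Docker mapped to the container's 22/tcp, which is then handed to the ssh clients below (Port:33163 in this run):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-039972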
	I0319 18:26:23.166460  454179 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0319 18:26:23.166534  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0319 18:26:23.166627  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.168618  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0319 18:26:23.175954  454179 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0319 18:26:23.175981  454179 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0319 18:26:23.176046  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.184824  454179 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0319 18:26:23.184852  454179 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0319 18:26:23.184916  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.193948  454179 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0319 18:26:23.194316  454179 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0319 18:26:23.194332  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0319 18:26:23.194400  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.218684  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0319 18:26:23.224433  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0319 18:26:23.227244  454179 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0319 18:26:23.227592  454179 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0319 18:26:23.228803  454179 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0319 18:26:23.236914  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0319 18:26:23.241716  454179 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 18:26:23.241741  454179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 18:26:23.241873  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.249243  454179 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0319 18:26:23.249480  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0319 18:26:23.256181  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0319 18:26:23.256530  454179 out.go:177]   - Using image docker.io/registry:2.8.3
	I0319 18:26:23.256872  454179 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0319 18:26:23.256922  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0319 18:26:23.257016  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.257686  454179 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-039972"
	I0319 18:26:23.257767  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:23.258343  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:23.282092  454179 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0319 18:26:23.282113  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0319 18:26:23.282173  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.309776  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0319 18:26:23.313698  454179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 18:26:23.321877  454179 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 18:26:23.321902  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 18:26:23.321969  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.331113  454179 addons.go:238] Setting addon default-storageclass=true in "addons-039972"
	I0319 18:26:23.331223  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:23.338180  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	W0319 18:26:23.339235  454179 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0319 18:26:23.350618  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.351593  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.359995  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0319 18:26:23.373729  454179 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0319 18:26:23.378801  454179 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0319 18:26:23.387833  454179 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0319 18:26:23.387953  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0319 18:26:23.388027  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.387863  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0319 18:26:23.389270  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.401489  454179 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0319 18:26:23.401506  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0319 18:26:23.401568  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.407102  454179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0319 18:26:23.421378  454179 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0319 18:26:23.421409  454179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0319 18:26:23.421485  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.436432  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.462963  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.464899  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.467708  454179 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0319 18:26:23.470626  454179 out.go:177]   - Using image docker.io/busybox:stable
	I0319 18:26:23.474943  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.475671  454179 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0319 18:26:23.475731  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0319 18:26:23.475809  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.486181  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.554463  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.561996  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.577956  454179 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 18:26:23.577978  454179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 18:26:23.578043  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:23.603363  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.632983  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.634976  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:23.640875  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	W0319 18:26:23.641850  454179 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0319 18:26:23.641885  454179 retry.go:31] will retry after 155.562774ms: ssh: handshake failed: EOF
	I0319 18:26:23.898446  454179 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0319 18:26:23.898511  454179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0319 18:26:23.900632  454179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 18:26:23.929624  454179 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0319 18:26:23.929650  454179 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0319 18:26:23.937726  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0319 18:26:23.960269  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 18:26:23.986623  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0319 18:26:23.993984  454179 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0319 18:26:23.994013  454179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0319 18:26:24.125614  454179 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0319 18:26:24.125639  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0319 18:26:24.130344  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0319 18:26:24.133256  454179 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0319 18:26:24.133277  454179 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0319 18:26:24.135098  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0319 18:26:24.137016  454179 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0319 18:26:24.137038  454179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0319 18:26:24.139239  454179 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 18:26:24.139260  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0319 18:26:24.149330  454179 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0319 18:26:24.149355  454179 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0319 18:26:24.160867  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0319 18:26:24.164521  454179 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0319 18:26:24.164549  454179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0319 18:26:24.196034  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 18:26:24.201934  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0319 18:26:24.277020  454179 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 18:26:24.277046  454179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 18:26:24.328148  454179 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0319 18:26:24.328172  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0319 18:26:24.335433  454179 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0319 18:26:24.335462  454179 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0319 18:26:24.339304  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0319 18:26:24.340797  454179 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0319 18:26:24.340818  454179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0319 18:26:24.354498  454179 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0319 18:26:24.354525  454179 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0319 18:26:24.451088  454179 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 18:26:24.451115  454179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 18:26:24.476467  454179 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0319 18:26:24.476502  454179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0319 18:26:24.532549  454179 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0319 18:26:24.532572  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0319 18:26:24.538659  454179 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0319 18:26:24.538682  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0319 18:26:24.547039  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0319 18:26:24.641993  454179 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0319 18:26:24.642018  454179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0319 18:26:24.654713  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 18:26:24.684797  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0319 18:26:24.736314  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0319 18:26:24.832690  454179 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0319 18:26:24.832725  454179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0319 18:26:24.993017  454179 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0319 18:26:24.993089  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0319 18:26:25.190214  454179 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0319 18:26:25.190292  454179 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0319 18:26:25.307101  454179 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0319 18:26:25.307185  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0319 18:26:25.386970  454179 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0319 18:26:25.387040  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0319 18:26:25.516374  454179 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0319 18:26:25.516447  454179 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0319 18:26:25.609679  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0319 18:26:25.697345  454179 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.447839199s)
	I0319 18:26:25.697464  454179 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
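Reconstructed from the sed expressions in the completed command above, the fragment injected into CoreDNS's Corefile is a hosts block ahead of the forward plugin (plus a `log` directive ahead of `errors`):

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }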
	I0319 18:26:25.697381  454179 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.796695239s)
	I0319 18:26:25.698801  454179 node_ready.go:35] waiting up to 6m0s for node "addons-039972" to be "Ready" ...
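node_ready.go polls the node object until its Ready condition turns True (the "Ready":"False" lines below are that poll). A roughly equivalent one-liner:

    kubectl --context addons-039972 wait --for=condition=Ready node/addons-039972 --timeout=6m0s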
	I0319 18:26:26.490918  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.553151179s)
	I0319 18:26:26.759095  454179 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-039972" context rescaled to 1 replicas
	I0319 18:26:28.124882  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:28.765281  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.804969424s)
	I0319 18:26:28.765344  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.778641974s)
	I0319 18:26:28.999926  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.869542641s)
	I0319 18:26:29.859076  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.723939143s)
	I0319 18:26:29.859488  454179 addons.go:479] Verifying addon ingress=true in "addons-039972"
	I0319 18:26:29.859142  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.698249331s)
	I0319 18:26:29.859160  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.663107726s)
	I0319 18:26:29.859224  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.657268934s)
	I0319 18:26:29.859294  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.51996701s)
	I0319 18:26:29.859320  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.312256689s)
	I0319 18:26:29.859761  454179 addons.go:479] Verifying addon registry=true in "addons-039972"
	I0319 18:26:29.859376  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204637148s)
	I0319 18:26:29.860243  454179 addons.go:479] Verifying addon metrics-server=true in "addons-039972"
	I0319 18:26:29.859432  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.174608629s)
	I0319 18:26:29.863002  454179 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-039972 service yakd-dashboard -n yakd-dashboard
	
	I0319 18:26:29.863103  454179 out.go:177] * Verifying ingress addon...
	I0319 18:26:29.863125  454179 out.go:177] * Verifying registry addon...
	I0319 18:26:29.868018  454179 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0319 18:26:29.868078  454179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0319 18:26:29.903498  454179 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0319 18:26:29.903525  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:29.903993  454179 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0319 18:26:29.904045  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
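The kapi.go waits above (and the long runs of Pending lines that follow) poll pods by label selector until they leave Pending and come up healthy. A roughly equivalent hand-run check for the ingress controller:

    kubectl --context addons-039972 -n ingress-nginx wait --for=condition=Ready \
      pod -l app.kubernetes.io/name=ingress-nginx --timeout=5m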
	I0319 18:26:30.061139  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.324777934s)
	W0319 18:26:30.061240  454179 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0319 18:26:30.061291  454179 retry.go:31] will retry after 272.774371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
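This failure is a CRD-establishment race, not a manifest bug: the VolumeSnapshot CRDs and a VolumeSnapshotClass that instantiates them are applied in a single invocation, and the API server has not registered the new kind by the time the class is validated, hence "no matches for kind VolumeSnapshotClass". minikube's retry (further down) re-applies with `kubectl apply --force`; an alternative by hand is to wait for the CRDs to report Established before applying the class:

    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml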
	I0319 18:26:30.207101  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:30.273402  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.66362839s)
	I0319 18:26:30.273486  454179 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-039972"
	I0319 18:26:30.276937  454179 out.go:177] * Verifying csi-hostpath-driver addon...
	I0319 18:26:30.280716  454179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0319 18:26:30.296670  454179 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0319 18:26:30.296750  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:30.334291  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0319 18:26:30.393933  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:30.394154  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:30.783824  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:30.874428  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:30.874540  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:31.284621  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:31.385493  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:31.385939  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:31.784529  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:31.871271  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:31.872190  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:32.284442  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:32.384988  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:32.385755  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:32.701836  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:32.795735  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:32.872755  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:32.873115  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:32.943654  454179 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0319 18:26:32.943744  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:32.964546  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:33.071590  454179 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0319 18:26:33.081442  454179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.747106207s)
	I0319 18:26:33.097617  454179 addons.go:238] Setting addon gcp-auth=true in "addons-039972"
	I0319 18:26:33.097667  454179 host.go:66] Checking if "addons-039972" exists ...
	I0319 18:26:33.098141  454179 cli_runner.go:164] Run: docker container inspect addons-039972 --format={{.State.Status}}
	I0319 18:26:33.115175  454179 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0319 18:26:33.115231  454179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-039972
	I0319 18:26:33.132508  454179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/addons-039972/id_rsa Username:docker}
	I0319 18:26:33.228430  454179 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0319 18:26:33.231458  454179 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0319 18:26:33.234208  454179 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0319 18:26:33.234233  454179 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0319 18:26:33.252378  454179 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0319 18:26:33.252400  454179 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0319 18:26:33.271825  454179 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0319 18:26:33.271847  454179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0319 18:26:33.284399  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:33.294993  454179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0319 18:26:33.374290  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:33.375078  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:33.799472  454179 addons.go:479] Verifying addon gcp-auth=true in "addons-039972"
	I0319 18:26:33.802584  454179 out.go:177] * Verifying gcp-auth addon...
	I0319 18:26:33.806208  454179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0319 18:26:33.808429  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:33.817634  454179 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0319 18:26:33.817669  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:33.908683  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:33.908788  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:34.284010  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:34.309705  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:34.371977  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:34.372164  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:34.702171  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:34.784446  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:34.809033  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:34.871181  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:34.871735  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:35.284464  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:35.309035  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:35.370834  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:35.371253  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:35.783734  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:35.809255  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:35.871400  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:35.871729  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:36.284092  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:36.309964  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:36.371947  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:36.372420  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:36.702599  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:36.785185  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:36.810129  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:36.872669  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:36.873072  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:37.284901  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:37.309378  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:37.371302  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:37.371448  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:37.784051  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:37.809505  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:37.872018  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:37.872024  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:38.284064  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:38.309678  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:38.371481  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:38.371990  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:38.784098  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:38.809917  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:38.872228  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:38.872549  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:39.202736  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:39.284404  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:39.309123  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:39.372136  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:39.372292  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:39.784371  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:39.808969  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:39.872526  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:39.873345  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:40.284436  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:40.309338  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:40.371204  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:40.371266  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:40.783847  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:40.809561  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:40.871477  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:40.871638  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:41.284189  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:41.310020  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:41.372208  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:41.372337  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:41.702074  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:41.783674  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:41.809289  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:41.871995  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:41.872188  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:42.284645  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:42.309649  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:42.372332  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:42.372484  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:42.785019  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:42.809564  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:42.871900  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:42.872132  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:43.283855  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:43.309223  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:43.371371  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:43.371778  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:43.783881  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:43.810346  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:43.871222  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:43.871299  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:44.202435  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:44.284410  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:44.308941  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:44.371881  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:44.372124  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:44.786531  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:44.811512  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:44.873556  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:44.874263  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:45.292014  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:45.311917  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:45.372178  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:45.372458  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:45.784019  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:45.809682  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:45.871527  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:45.872086  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:46.284396  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:46.308705  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:46.372820  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:46.373130  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:46.703633  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:46.784096  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:46.809804  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:46.871478  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:46.872130  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:47.284072  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:47.309512  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:47.371855  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:47.372081  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:47.783642  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:47.809273  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:47.871239  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:47.871298  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:48.284465  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:48.310305  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:48.385695  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:48.385949  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:48.783483  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:48.809350  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:48.871267  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:48.871488  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:49.202328  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:49.284074  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:49.309748  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:49.371675  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:49.371908  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:49.783960  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:49.809924  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:49.871680  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:49.872132  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:50.283831  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:50.309377  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:50.371646  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:50.371742  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:50.784033  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:50.810025  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:50.872022  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:50.872226  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:51.284099  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:51.309477  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:51.371305  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:51.371716  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:51.701519  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:51.784036  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:51.809431  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:51.871238  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:51.871641  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:52.284598  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:52.309181  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:52.371260  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:52.372428  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:52.784244  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:52.808871  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:52.871610  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:52.872064  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:53.284086  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:53.309653  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:53.372469  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:53.372792  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:53.783881  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:53.809423  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:53.871447  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:53.871822  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:54.201584  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:54.284381  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:54.308920  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:54.371787  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:54.371974  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:54.784330  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:54.810079  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:54.871742  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:54.872282  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:55.283795  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:55.309441  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:55.371145  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:55.371505  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:55.784027  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:55.809772  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:55.871846  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:55.871905  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:56.201855  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:56.283772  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:56.309427  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:56.371721  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:56.372404  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:56.784025  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:56.809688  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:56.872188  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:56.872239  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:57.284141  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:57.385207  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:57.385320  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:57.385568  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:57.784390  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:57.809191  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:57.872127  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:57.872291  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:58.202350  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:26:58.284088  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:58.309652  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:58.372474  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:58.372719  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:58.784651  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:58.809604  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:58.871466  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:58.871568  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:59.284441  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:59.309190  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:59.372041  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:59.372403  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:26:59.784065  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:26:59.809712  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:26:59.871717  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:26:59.871906  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:00.203357  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:27:00.289265  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:00.311816  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:00.372873  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:00.373163  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:00.784432  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:00.809131  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:00.871218  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:00.871455  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:01.283596  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:01.309293  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:01.371390  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:01.372167  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:01.784539  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:01.809627  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:01.871995  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:01.872196  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:02.284774  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:02.309369  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:02.371631  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:02.371731  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:02.702368  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:27:02.784474  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:02.809032  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:02.870971  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:02.871594  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:03.283660  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:03.309729  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:03.371985  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:03.372531  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:03.784536  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:03.809340  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:03.871493  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:03.871606  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:04.283694  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:04.309451  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:04.371292  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:04.371933  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:04.784181  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:04.808782  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:04.872240  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:04.872487  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:05.202430  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:27:05.284100  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:05.309741  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:05.372173  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:05.372393  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:05.784137  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:05.809861  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:05.871691  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:05.871775  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:06.284319  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:06.309263  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:06.371315  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:06.372300  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:06.783700  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:06.809648  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:06.872244  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:06.872292  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:07.202487  454179 node_ready.go:53] node "addons-039972" has status "Ready":"False"
	I0319 18:27:07.284803  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:07.309610  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:07.374683  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:07.374932  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:07.784012  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:07.809512  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:07.871580  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:07.871662  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:08.283719  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:08.309257  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:08.371346  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:08.371466  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:08.784741  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:08.809973  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:08.871679  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:08.871866  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:09.204220  454179 node_ready.go:49] node "addons-039972" has status "Ready":"True"
	I0319 18:27:09.204295  454179 node_ready.go:38] duration metric: took 43.505468512s for node "addons-039972" to be "Ready" ...
	I0319 18:27:09.204319  454179 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
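The kapi.go:96 lines filling this log come from minikube's addon readiness polling: each addon's pods are listed by label selector on a fixed interval until every matching pod reports the Ready condition. The following is a minimal client-go sketch of that loop, assuming a default kubeconfig; the function names, namespace, and selector are illustrative, and this is not minikube's actual kapi.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsReady lists pods matching selector in ns until all of them
// report the PodReady condition, polling on a fixed interval, in the style
// of the "waiting for pod ..." lines above.
func waitForPodsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~500ms cadence seen in this log
	}
	return fmt.Errorf("pods %q in namespace %q were not Ready within %v", selector, ns, timeout)
}

// allReady reports whether every pod has PodReady=True in its conditions.
func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForPodsReady(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute)
	fmt.Println(err)
}

Run against the cluster above, a call like waitForPodsReady(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute) would trace the same Pending-to-Ready progression these log lines record.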
	I0319 18:27:09.223866  454179 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-l8gjn" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:09.295430  454179 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0319 18:27:09.295504  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:09.318092  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:09.477912  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:09.478337  454179 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0319 18:27:09.478351  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:09.825491  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:09.826401  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:09.914653  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:09.915125  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:10.284887  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:10.310145  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:10.372728  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:10.372990  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:10.788875  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:10.812217  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:10.876538  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:10.876889  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:11.237655  454179 pod_ready.go:103] pod "coredns-668d6bf9bc-l8gjn" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:11.285161  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:11.309450  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:11.374555  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:11.375080  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:11.785735  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:11.809435  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:11.871728  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:11.872410  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:12.229819  454179 pod_ready.go:93] pod "coredns-668d6bf9bc-l8gjn" in "kube-system" namespace has status "Ready":"True"
	I0319 18:27:12.229850  454179 pod_ready.go:82] duration metric: took 3.005909211s for pod "coredns-668d6bf9bc-l8gjn" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.229869  454179 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-039972" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.233932  454179 pod_ready.go:93] pod "etcd-addons-039972" in "kube-system" namespace has status "Ready":"True"
	I0319 18:27:12.233952  454179 pod_ready.go:82] duration metric: took 4.07652ms for pod "etcd-addons-039972" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.233967  454179 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-039972" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.238447  454179 pod_ready.go:93] pod "kube-apiserver-addons-039972" in "kube-system" namespace has status "Ready":"True"
	I0319 18:27:12.238473  454179 pod_ready.go:82] duration metric: took 4.498528ms for pod "kube-apiserver-addons-039972" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.238484  454179 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-039972" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.242740  454179 pod_ready.go:93] pod "kube-controller-manager-addons-039972" in "kube-system" namespace has status "Ready":"True"
	I0319 18:27:12.242764  454179 pod_ready.go:82] duration metric: took 4.271679ms for pod "kube-controller-manager-addons-039972" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.242778  454179 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n6nwg" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.247338  454179 pod_ready.go:93] pod "kube-proxy-n6nwg" in "kube-system" namespace has status "Ready":"True"
	I0319 18:27:12.247366  454179 pod_ready.go:82] duration metric: took 4.5809ms for pod "kube-proxy-n6nwg" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.247378  454179 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-039972" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.284582  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:12.309364  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:12.371830  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:12.371959  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:12.629850  454179 pod_ready.go:93] pod "kube-scheduler-addons-039972" in "kube-system" namespace has status "Ready":"True"
	I0319 18:27:12.629875  454179 pod_ready.go:82] duration metric: took 382.488268ms for pod "kube-scheduler-addons-039972" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.629887  454179 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:12.784521  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:12.808889  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:12.873936  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:12.874570  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:13.284894  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:13.310123  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:13.372344  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:13.372344  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:13.784545  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:13.809591  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:13.873230  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:13.873438  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:14.284869  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:14.309963  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:14.372579  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:14.373032  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:14.635015  454179 pod_ready.go:103] pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:14.784096  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:14.809502  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:14.871955  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:14.872394  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:15.286928  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:15.310122  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:15.372605  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:15.372769  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:15.785078  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:15.810092  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:15.883464  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:15.883670  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:16.284733  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:16.309686  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:16.373778  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:16.374336  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:16.635921  454179 pod_ready.go:103] pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:16.784479  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:16.809387  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:16.873691  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:16.874169  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:17.286343  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:17.309735  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:17.372951  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:17.374300  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:17.828881  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:17.834716  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:17.877601  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:17.889267  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:18.285333  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:18.309171  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:18.378270  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:18.378891  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:18.784515  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:18.813935  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:18.879830  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:18.880358  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:19.136042  454179 pod_ready.go:103] pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:19.297161  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:19.310150  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:19.377729  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:19.378337  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:19.786121  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:19.809267  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:19.871248  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:19.881826  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:20.284886  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:20.309570  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:20.372405  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:20.372645  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:20.786328  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:20.813299  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:20.873127  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:20.873582  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:21.138360  454179 pod_ready.go:103] pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:21.284863  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:21.309964  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:21.373574  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:21.374187  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:21.789325  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:21.810271  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:21.871777  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:21.872391  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:22.288183  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:22.309033  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:22.372527  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:22.373073  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:22.793350  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:22.810894  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:22.878053  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:22.879188  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:23.284391  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:23.309429  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:23.375746  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:23.376757  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:23.636510  454179 pod_ready.go:103] pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:23.791219  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:23.809713  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:23.873074  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:23.873254  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:24.286195  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:24.311660  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:24.372311  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:24.372572  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:24.784127  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:24.809488  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:24.872017  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:24.873138  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:25.285449  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:25.309490  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:25.373708  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:25.374117  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:25.636564  454179 pod_ready.go:103] pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:25.784708  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:25.809173  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:25.878909  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:25.880292  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:26.291362  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:26.309809  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:26.388159  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:26.394356  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:26.784310  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:26.809099  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:26.872526  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:26.872678  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:27.290727  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:27.309195  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:27.372000  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:27.372830  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:27.640032  454179 pod_ready.go:103] pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:27.794165  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:27.811070  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:27.877133  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:27.877360  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:28.284227  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:28.309260  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:28.372198  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:28.372441  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:28.784315  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:28.809698  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:28.873917  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:28.874205  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:29.154253  454179 pod_ready.go:93] pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace has status "Ready":"True"
	I0319 18:27:29.154327  454179 pod_ready.go:82] duration metric: took 16.524432317s for pod "metrics-server-7fbb699795-xtj74" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:29.154355  454179 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6qm78" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:29.291927  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:29.390899  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:29.391088  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:29.391921  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:29.784750  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:29.809542  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:29.871992  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:29.872163  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:30.284482  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:30.308948  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:30.372757  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:30.372875  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:30.784368  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:30.809204  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:30.871281  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:30.871909  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:31.168680  454179 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6qm78" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:31.284768  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:31.309500  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:31.374529  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:31.375129  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:31.790737  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:31.809717  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:31.873489  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:31.873719  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:32.285091  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:32.309682  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:32.373999  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:32.374517  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:32.795239  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:32.822775  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:32.873660  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:32.875679  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:33.285058  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:33.314363  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:33.372609  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:33.372777  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:33.661052  454179 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6qm78" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:33.788885  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:33.816305  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:33.873244  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:33.873588  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:34.285605  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:34.310386  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:34.375904  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:34.376417  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:34.784216  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:34.808900  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:34.874739  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:34.876736  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:35.284770  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:35.310177  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:35.371644  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:35.371775  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:35.662792  454179 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6qm78" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:35.802272  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:35.812485  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:35.872796  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:35.872937  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:36.294566  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:36.309117  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:36.372784  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:36.373194  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:36.784439  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:36.810081  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:36.873541  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:36.874369  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:37.289535  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:37.309233  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:37.373421  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:37.373841  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:37.785175  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:37.809392  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:37.871982  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:37.872159  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:38.160752  454179 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6qm78" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:38.285406  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:38.312197  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:38.375767  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:38.383980  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:38.791464  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:38.809468  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:38.873215  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:38.874963  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:39.285031  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:39.309608  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:39.373183  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:39.373410  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:39.784522  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:39.809735  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:39.871783  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:39.873745  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:40.161357  454179 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6qm78" in "kube-system" namespace has status "Ready":"False"
	I0319 18:27:40.284903  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:40.310147  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:40.372862  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:40.373112  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:40.794015  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:40.809936  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:40.873399  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:40.874024  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:41.285015  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:41.386020  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:41.386317  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:41.386974  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:41.663054  454179 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6qm78" in "kube-system" namespace has status "Ready":"True"
	I0319 18:27:41.663083  454179 pod_ready.go:82] duration metric: took 12.50870405s for pod "nvidia-device-plugin-daemonset-6qm78" in "kube-system" namespace to be "Ready" ...
	I0319 18:27:41.663101  454179 pod_ready.go:39] duration metric: took 32.458755141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0319 18:27:41.663118  454179 api_server.go:52] waiting for apiserver process to appear ...
	I0319 18:27:41.663160  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 18:27:41.663226  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 18:27:41.706118  454179 cri.go:89] found id: "333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8"
	I0319 18:27:41.706140  454179 cri.go:89] found id: ""
	I0319 18:27:41.706148  454179 logs.go:282] 1 containers: [333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8]
	I0319 18:27:41.706204  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:41.711206  454179 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 18:27:41.711308  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 18:27:41.749900  454179 cri.go:89] found id: "41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53"
	I0319 18:27:41.749922  454179 cri.go:89] found id: ""
	I0319 18:27:41.749931  454179 logs.go:282] 1 containers: [41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53]
	I0319 18:27:41.750009  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:41.753561  454179 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 18:27:41.753631  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 18:27:41.784848  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:41.797028  454179 cri.go:89] found id: "7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85"
	I0319 18:27:41.797047  454179 cri.go:89] found id: ""
	I0319 18:27:41.797055  454179 logs.go:282] 1 containers: [7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85]
	I0319 18:27:41.797111  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:41.800843  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 18:27:41.800909  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 18:27:41.810610  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:41.840294  454179 cri.go:89] found id: "1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65"
	I0319 18:27:41.840319  454179 cri.go:89] found id: ""
	I0319 18:27:41.840327  454179 logs.go:282] 1 containers: [1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65]
	I0319 18:27:41.840391  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:41.843943  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 18:27:41.844016  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 18:27:41.873411  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:41.873531  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:41.884574  454179 cri.go:89] found id: "29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e"
	I0319 18:27:41.884643  454179 cri.go:89] found id: ""
	I0319 18:27:41.884665  454179 logs.go:282] 1 containers: [29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e]
	I0319 18:27:41.884755  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:41.889912  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 18:27:41.890028  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 18:27:41.930576  454179 cri.go:89] found id: "3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9"
	I0319 18:27:41.930604  454179 cri.go:89] found id: ""
	I0319 18:27:41.930612  454179 logs.go:282] 1 containers: [3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9]
	I0319 18:27:41.930701  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:41.934531  454179 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 18:27:41.934630  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 18:27:41.971911  454179 cri.go:89] found id: "837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed"
	I0319 18:27:41.971940  454179 cri.go:89] found id: ""
	I0319 18:27:41.971948  454179 logs.go:282] 1 containers: [837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed]
	I0319 18:27:41.972015  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:41.975538  454179 logs.go:123] Gathering logs for kube-apiserver [333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8] ...
	I0319 18:27:41.975560  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8"
	I0319 18:27:42.059349  454179 logs.go:123] Gathering logs for etcd [41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53] ...
	I0319 18:27:42.059388  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53"
	I0319 18:27:42.126249  454179 logs.go:123] Gathering logs for kube-proxy [29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e] ...
	I0319 18:27:42.126294  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e"
	I0319 18:27:42.196360  454179 logs.go:123] Gathering logs for kubelet ...
	I0319 18:27:42.196392  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0319 18:27:42.267123  454179 logs.go:138] Found kubelet problem: Mar 19 18:26:29 addons-039972 kubelet[1532]: I0319 18:26:29.338370    1532 status_manager.go:890] "Failed to get status for pod" podUID="f59103a1-8105-4513-af28-2efe963fd744" pod="gadget/gadget-tmnzh" err="pods \"gadget-tmnzh\" is forbidden: User \"system:node:addons-039972\" cannot get resource \"pods\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-039972' and this object"
	W0319 18:27:42.287619  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.178387    1532 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-039972' and this object
	W0319 18:27:42.287849  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.178438    1532 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:42.288035  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.187527    1532 reflector.go:569] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-039972' and this object
	W0319 18:27:42.288259  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.187575    1532 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:42.288438  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.190060    1532 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-039972' and this object
	W0319 18:27:42.288670  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.190098    1532 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:42.288830  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.190177    1532 reflector.go:569] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-039972" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-039972' and this object
	W0319 18:27:42.289034  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.190194    1532 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:42.289276  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: I0319 18:27:09.206710    1532 status_manager.go:890] "Failed to get status for pod" podUID="d0d6ff6b-9168-433a-9030-05db17ee8d50" pod="local-path-storage/local-path-provisioner-76f89f99b5-ldqjc" err="pods \"local-path-provisioner-76f89f99b5-ldqjc\" is forbidden: User \"system:node:addons-039972\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object"
	W0319 18:27:42.289461  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.206933    1532 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-039972' and this object
	W0319 18:27:42.289682  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.206969    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:42.289869  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207025    1532 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-039972" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-039972' and this object
	W0319 18:27:42.290083  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207043    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:42.290269  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207085    1532 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:27:42.290497  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207106    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:42.290733  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207145    1532 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:27:42.290966  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207159    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	I0319 18:27:42.298395  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:42.309725  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:42.324319  454179 logs.go:123] Gathering logs for describe nodes ...
	I0319 18:27:42.324360  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 18:27:42.374596  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:42.375796  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:42.567671  454179 logs.go:123] Gathering logs for coredns [7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85] ...
	I0319 18:27:42.567744  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85"
	I0319 18:27:42.617381  454179 logs.go:123] Gathering logs for kube-scheduler [1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65] ...
	I0319 18:27:42.617408  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65"
	I0319 18:27:42.675792  454179 logs.go:123] Gathering logs for kube-controller-manager [3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9] ...
	I0319 18:27:42.675868  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9"
	I0319 18:27:42.780991  454179 logs.go:123] Gathering logs for kindnet [837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed] ...
	I0319 18:27:42.781079  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed"
	I0319 18:27:42.788406  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:42.809106  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:42.854182  454179 logs.go:123] Gathering logs for CRI-O ...
	I0319 18:27:42.854264  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 18:27:42.872549  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:42.872885  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:42.987557  454179 logs.go:123] Gathering logs for container status ...
	I0319 18:27:42.987601  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 18:27:43.068029  454179 logs.go:123] Gathering logs for dmesg ...
	I0319 18:27:43.068114  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 18:27:43.086260  454179 out.go:358] Setting ErrFile to fd 2...
	I0319 18:27:43.086332  454179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0319 18:27:43.086409  454179 out.go:270] X Problems detected in kubelet:
	W0319 18:27:43.086451  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207043    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:43.086497  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207085    1532 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:27:43.086535  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207106    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:43.086656  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207145    1532 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:27:43.086692  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207159    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	I0319 18:27:43.086736  454179 out.go:358] Setting ErrFile to fd 2...
	I0319 18:27:43.086762  454179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:27:43.284759  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:43.316055  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:43.373293  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:43.373342  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:43.784614  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:43.808900  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:43.872008  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:43.872728  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:44.284536  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:44.309902  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:44.371664  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:44.372345  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:44.784432  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:44.809083  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:44.873029  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:44.873261  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:45.289511  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:45.310511  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:45.373954  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:45.374257  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:45.787714  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:45.813240  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:45.873179  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:45.873447  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:46.284829  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:46.309626  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:46.385460  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:46.385701  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:46.783684  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:46.809180  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:46.872207  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:46.872842  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:47.284536  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:47.309188  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:47.372188  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:47.372297  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:47.788815  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:47.811051  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:47.878537  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:47.880469  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:48.292607  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:48.325431  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:48.395834  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:48.396400  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:48.792219  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:48.810164  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:48.872728  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:48.873121  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:49.284422  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:49.309941  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:49.373406  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:49.374641  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:49.784400  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:49.809406  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:49.872514  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:49.872560  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:50.284997  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:50.309382  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:50.385970  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:50.386610  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0319 18:27:50.784976  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:50.813743  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:50.873001  454179 kapi.go:107] duration metric: took 1m21.004907973s to wait for kubernetes.io/minikube-addons=registry ...
	I0319 18:27:50.873225  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:51.284005  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:51.309589  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:51.371393  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:51.784715  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:51.809426  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:51.871714  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:52.288994  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:52.309749  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:52.372929  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:52.794645  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:52.809729  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:52.871733  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:53.088174  454179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 18:27:53.103972  454179 api_server.go:72] duration metric: took 1m30.222504755s to wait for apiserver process to appear ...
	I0319 18:27:53.104031  454179 api_server.go:88] waiting for apiserver healthz status ...
	I0319 18:27:53.104063  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 18:27:53.104122  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 18:27:53.149930  454179 cri.go:89] found id: "333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8"
	I0319 18:27:53.149952  454179 cri.go:89] found id: ""
	I0319 18:27:53.149960  454179 logs.go:282] 1 containers: [333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8]
	I0319 18:27:53.150017  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:53.154209  454179 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 18:27:53.154289  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 18:27:53.204953  454179 cri.go:89] found id: "41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53"
	I0319 18:27:53.204976  454179 cri.go:89] found id: ""
	I0319 18:27:53.204984  454179 logs.go:282] 1 containers: [41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53]
	I0319 18:27:53.205039  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:53.208803  454179 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 18:27:53.208974  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 18:27:53.275024  454179 cri.go:89] found id: "7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85"
	I0319 18:27:53.275047  454179 cri.go:89] found id: ""
	I0319 18:27:53.275055  454179 logs.go:282] 1 containers: [7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85]
	I0319 18:27:53.275112  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:53.278802  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 18:27:53.278873  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 18:27:53.283838  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:53.310116  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:53.336731  454179 cri.go:89] found id: "1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65"
	I0319 18:27:53.336762  454179 cri.go:89] found id: ""
	I0319 18:27:53.336772  454179 logs.go:282] 1 containers: [1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65]
	I0319 18:27:53.336829  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:53.340502  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 18:27:53.340575  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 18:27:53.371991  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:53.385137  454179 cri.go:89] found id: "29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e"
	I0319 18:27:53.385160  454179 cri.go:89] found id: ""
	I0319 18:27:53.385169  454179 logs.go:282] 1 containers: [29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e]
	I0319 18:27:53.385225  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:53.393420  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 18:27:53.393497  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 18:27:53.452012  454179 cri.go:89] found id: "3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9"
	I0319 18:27:53.452036  454179 cri.go:89] found id: ""
	I0319 18:27:53.452045  454179 logs.go:282] 1 containers: [3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9]
	I0319 18:27:53.452100  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:53.456874  454179 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 18:27:53.456950  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 18:27:53.533412  454179 cri.go:89] found id: "837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed"
	I0319 18:27:53.533434  454179 cri.go:89] found id: ""
	I0319 18:27:53.533442  454179 logs.go:282] 1 containers: [837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed]
	I0319 18:27:53.533498  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:27:53.537140  454179 logs.go:123] Gathering logs for etcd [41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53] ...
	I0319 18:27:53.537162  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53"
	I0319 18:27:53.604299  454179 logs.go:123] Gathering logs for kube-scheduler [1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65] ...
	I0319 18:27:53.604333  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65"
	I0319 18:27:53.681025  454179 logs.go:123] Gathering logs for kindnet [837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed] ...
	I0319 18:27:53.681055  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed"
	I0319 18:27:53.733396  454179 logs.go:123] Gathering logs for container status ...
	I0319 18:27:53.733423  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 18:27:53.789940  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:53.814480  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:53.835123  454179 logs.go:123] Gathering logs for describe nodes ...
	I0319 18:27:53.835206  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 18:27:53.871247  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:54.049015  454179 logs.go:123] Gathering logs for coredns [7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85] ...
	I0319 18:27:54.049126  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85"
	I0319 18:27:54.121164  454179 logs.go:123] Gathering logs for kube-proxy [29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e] ...
	I0319 18:27:54.121337  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e"
	I0319 18:27:54.202459  454179 logs.go:123] Gathering logs for kube-controller-manager [3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9] ...
	I0319 18:27:54.202527  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9"
	I0319 18:27:54.298887  454179 logs.go:123] Gathering logs for CRI-O ...
	I0319 18:27:54.298959  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 18:27:54.306841  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:54.313256  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:54.386409  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:54.446010  454179 logs.go:123] Gathering logs for kubelet ...
	I0319 18:27:54.446083  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0319 18:27:54.531785  454179 logs.go:138] Found kubelet problem: Mar 19 18:26:29 addons-039972 kubelet[1532]: I0319 18:26:29.338370    1532 status_manager.go:890] "Failed to get status for pod" podUID="f59103a1-8105-4513-af28-2efe963fd744" pod="gadget/gadget-tmnzh" err="pods \"gadget-tmnzh\" is forbidden: User \"system:node:addons-039972\" cannot get resource \"pods\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-039972' and this object"
	W0319 18:27:54.554736  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.178387    1532 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.555027  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.178438    1532 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.555238  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.187527    1532 reflector.go:569] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.555487  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.187575    1532 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.555700  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.190060    1532 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.555941  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.190098    1532 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.556128  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.190177    1532 reflector.go:569] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-039972" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.556353  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.190194    1532 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.556623  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: I0319 18:27:09.206710    1532 status_manager.go:890] "Failed to get status for pod" podUID="d0d6ff6b-9168-433a-9030-05db17ee8d50" pod="local-path-storage/local-path-provisioner-76f89f99b5-ldqjc" err="pods \"local-path-provisioner-76f89f99b5-ldqjc\" is forbidden: User \"system:node:addons-039972\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object"
	W0319 18:27:54.556828  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.206933    1532 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.557071  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.206969    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.557267  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207025    1532 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-039972" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.557499  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207043    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.557708  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207085    1532 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.558031  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207106    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.558247  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207145    1532 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.558732  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207159    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	I0319 18:27:54.593930  454179 logs.go:123] Gathering logs for dmesg ...
	I0319 18:27:54.594007  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 18:27:54.610979  454179 logs.go:123] Gathering logs for kube-apiserver [333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8] ...
	I0319 18:27:54.611005  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8"
	I0319 18:27:54.681715  454179 out.go:358] Setting ErrFile to fd 2...
	I0319 18:27:54.681932  454179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0319 18:27:54.682015  454179 out.go:270] X Problems detected in kubelet:
	W0319 18:27:54.682164  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207043    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.682199  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207085    1532 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.682242  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207106    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:27:54.682282  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207145    1532 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:27:54.682312  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207159    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	I0319 18:27:54.682351  454179 out.go:358] Setting ErrFile to fd 2...
	I0319 18:27:54.682377  454179 out.go:392] TERM=,COLORTERM=, which probably does not support color
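The kapi.go:96 lines before and after this point come from a periodic label-selector poll: list the pods matching a selector, report any that are not yet Running, sleep, repeat. As a minimal sketch only, not minikube's actual kapi implementation, a client-go loop of the same shape looks like the following (the package name waitutil and helper WaitForPodsRunning are invented for illustration):

package waitutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodsRunning polls the pods matching selector in ns until every match
// reports phase Running, emitting a "waiting for pod" line per failed check,
// much like the log lines surrounding this sketch.
func WaitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval time.Duration) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != v1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}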
	I0319 18:27:54.789585  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:54.809217  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:54.872431  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:55.284718  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:55.309559  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:55.371551  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:55.784697  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:55.809454  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:55.871250  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:56.284844  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:56.309328  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:56.370992  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:56.785557  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:56.818933  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:56.891085  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:57.290427  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:57.310055  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:57.371121  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:57.812859  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:57.822693  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:57.872669  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:58.293267  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:58.309067  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:58.372250  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:58.787575  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:58.809905  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:58.872240  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:59.285374  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:59.310471  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:59.372015  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:27:59.785120  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:27:59.811949  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:27:59.879111  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:00.287452  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:00.309714  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:00.372205  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:00.785468  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:00.809084  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:00.874282  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:01.285427  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:01.310279  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:01.379645  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:01.786544  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:01.809072  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:01.872605  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:02.285609  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:02.311099  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:02.371624  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:02.785353  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:02.810667  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:02.871872  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:03.285115  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:03.318332  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:03.371506  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:03.785890  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:03.809916  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:03.872231  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:04.285903  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:04.310127  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:04.371864  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:04.683375  454179 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0319 18:28:04.694629  454179 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0319 18:28:04.695676  454179 api_server.go:141] control plane version: v1.32.2
	I0319 18:28:04.695703  454179 api_server.go:131] duration metric: took 11.591663929s to wait for apiserver health ...
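The healthz probe logged just above amounts to an HTTPS GET against the apiserver that expects the literal body "ok". A simplified stand-in is sketched below; it assumes the endpoint permits anonymous access and skips certificate verification, whereas minikube's real check authenticates with client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: anonymous access to /healthz and no cert verification.
	// The real check uses the cluster's client certificates instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with body "ok", as in the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}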
	I0319 18:28:04.695712  454179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 18:28:04.695732  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0319 18:28:04.695795  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0319 18:28:04.752424  454179 cri.go:89] found id: "333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8"
	I0319 18:28:04.752496  454179 cri.go:89] found id: ""
	I0319 18:28:04.752517  454179 logs.go:282] 1 containers: [333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8]
	I0319 18:28:04.752604  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:28:04.756315  454179 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0319 18:28:04.756428  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0319 18:28:04.795326  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:04.806500  454179 cri.go:89] found id: "41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53"
	I0319 18:28:04.806582  454179 cri.go:89] found id: ""
	I0319 18:28:04.806605  454179 logs.go:282] 1 containers: [41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53]
	I0319 18:28:04.806698  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:28:04.810604  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:04.811352  454179 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0319 18:28:04.811428  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0319 18:28:04.854295  454179 cri.go:89] found id: "7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85"
	I0319 18:28:04.854319  454179 cri.go:89] found id: ""
	I0319 18:28:04.854327  454179 logs.go:282] 1 containers: [7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85]
	I0319 18:28:04.854393  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:28:04.858754  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0319 18:28:04.858870  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0319 18:28:04.872788  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:04.923263  454179 cri.go:89] found id: "1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65"
	I0319 18:28:04.923324  454179 cri.go:89] found id: ""
	I0319 18:28:04.923354  454179 logs.go:282] 1 containers: [1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65]
	I0319 18:28:04.923451  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:28:04.928289  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0319 18:28:04.928412  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0319 18:28:04.992624  454179 cri.go:89] found id: "29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e"
	I0319 18:28:04.992693  454179 cri.go:89] found id: ""
	I0319 18:28:04.992714  454179 logs.go:282] 1 containers: [29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e]
	I0319 18:28:04.992799  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:28:04.997412  454179 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0319 18:28:04.997531  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0319 18:28:05.054764  454179 cri.go:89] found id: "3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9"
	I0319 18:28:05.054835  454179 cri.go:89] found id: ""
	I0319 18:28:05.054855  454179 logs.go:282] 1 containers: [3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9]
	I0319 18:28:05.054941  454179 ssh_runner.go:195] Run: which crictl
	I0319 18:28:05.059628  454179 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0319 18:28:05.059770  454179 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0319 18:28:05.115450  454179 cri.go:89] found id: "837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed"
	I0319 18:28:05.115520  454179 cri.go:89] found id: ""
	I0319 18:28:05.115542  454179 logs.go:282] 1 containers: [837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed]
	I0319 18:28:05.115636  454179 ssh_runner.go:195] Run: which crictl
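Each cri.go listing in the block above resolves a control-plane component name to a container ID by running `sudo crictl ps -a --quiet --name=<component>` on the node, which prints one matching container ID per line. A minimal wrapper around that same command (the helper containerIDs is invented for illustration; it assumes crictl is on the node's PATH and passwordless sudo is available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same filter used in the logs above and returns
// one container ID per output line.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Println("found ids:", ids)
}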
	I0319 18:28:05.119936  454179 logs.go:123] Gathering logs for kubelet ...
	I0319 18:28:05.120009  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0319 18:28:05.184243  454179 logs.go:138] Found kubelet problem: Mar 19 18:26:29 addons-039972 kubelet[1532]: I0319 18:26:29.338370    1532 status_manager.go:890] "Failed to get status for pod" podUID="f59103a1-8105-4513-af28-2efe963fd744" pod="gadget/gadget-tmnzh" err="pods \"gadget-tmnzh\" is forbidden: User \"system:node:addons-039972\" cannot get resource \"pods\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-039972' and this object"
	W0319 18:28:05.204867  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.178387    1532 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-039972' and this object
	W0319 18:28:05.205159  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.178438    1532 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:05.205670  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.187527    1532 reflector.go:569] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-039972' and this object
	W0319 18:28:05.205978  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.187575    1532 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:05.206193  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.190060    1532 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-039972' and this object
	W0319 18:28:05.206437  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.190098    1532 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:05.206634  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.190177    1532 reflector.go:569] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-039972" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-039972' and this object
	W0319 18:28:05.206864  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.190194    1532 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:05.207137  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: I0319 18:27:09.206710    1532 status_manager.go:890] "Failed to get status for pod" podUID="d0d6ff6b-9168-433a-9030-05db17ee8d50" pod="local-path-storage/local-path-provisioner-76f89f99b5-ldqjc" err="pods \"local-path-provisioner-76f89f99b5-ldqjc\" is forbidden: User \"system:node:addons-039972\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object"
	W0319 18:28:05.207340  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.206933    1532 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-039972' and this object
	W0319 18:28:05.207582  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.206969    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:05.207778  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207025    1532 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-039972" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-039972' and this object
	W0319 18:28:05.208018  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207043    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:05.208227  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207085    1532 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:28:05.208480  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207106    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:05.208746  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207145    1532 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:28:05.209006  454179 logs.go:138] Found kubelet problem: Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207159    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	I0319 18:28:05.248907  454179 logs.go:123] Gathering logs for dmesg ...
	I0319 18:28:05.249001  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0319 18:28:05.285264  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:05.287928  454179 logs.go:123] Gathering logs for etcd [41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53] ...
	I0319 18:28:05.287957  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53"
	I0319 18:28:05.309748  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:05.376817  454179 logs.go:123] Gathering logs for kube-controller-manager [3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9] ...
	I0319 18:28:05.376856  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9"
	I0319 18:28:05.377421  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:05.495091  454179 logs.go:123] Gathering logs for container status ...
	I0319 18:28:05.495189  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0319 18:28:05.561044  454179 logs.go:123] Gathering logs for describe nodes ...
	I0319 18:28:05.561128  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0319 18:28:05.692231  454179 logs.go:123] Gathering logs for kube-apiserver [333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8] ...
	I0319 18:28:05.692308  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8"
	I0319 18:28:05.785674  454179 logs.go:123] Gathering logs for coredns [7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85] ...
	I0319 18:28:05.785707  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85"
	I0319 18:28:05.787910  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:05.811216  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:05.845355  454179 logs.go:123] Gathering logs for kube-scheduler [1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65] ...
	I0319 18:28:05.845507  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65"
	I0319 18:28:05.873318  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:05.920879  454179 logs.go:123] Gathering logs for kube-proxy [29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e] ...
	I0319 18:28:05.920957  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e"
	I0319 18:28:05.980202  454179 logs.go:123] Gathering logs for kindnet [837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed] ...
	I0319 18:28:05.980228  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed"
	I0319 18:28:06.034100  454179 logs.go:123] Gathering logs for CRI-O ...
	I0319 18:28:06.034133  454179 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0319 18:28:06.132431  454179 out.go:358] Setting ErrFile to fd 2...
	I0319 18:28:06.132464  454179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0319 18:28:06.132523  454179 out.go:270] X Problems detected in kubelet:
	W0319 18:28:06.132539  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207043    1532 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:06.132547  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207085    1532 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:28:06.132559  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207106    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	W0319 18:28:06.132566  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: W0319 18:27:09.207145    1532 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-039972" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-039972' and this object
	W0319 18:28:06.132579  454179 out.go:270]   Mar 19 18:27:09 addons-039972 kubelet[1532]: E0319 18:27:09.207159    1532 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-039972\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-039972' and this object" logger="UnhandledError"
	I0319 18:28:06.132587  454179 out.go:358] Setting ErrFile to fd 2...
	I0319 18:28:06.132593  454179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:28:06.284609  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:06.309736  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:06.372121  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:06.787291  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:06.810120  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:06.871202  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:07.285105  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:07.310442  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:07.371887  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:07.786324  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:07.813747  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:07.872199  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:08.286687  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:08.309572  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:08.373356  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:08.784268  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:08.809302  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:08.871537  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:09.291021  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:09.310884  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:09.372129  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:09.788365  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:09.809025  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:09.871001  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:10.285127  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:10.309885  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:10.372425  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:10.786059  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:10.810034  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:10.886262  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:11.290380  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:11.310204  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:11.371767  454179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0319 18:28:11.785598  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:11.809772  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:11.887677  454179 kapi.go:107] duration metric: took 1m42.019656803s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0319 18:28:12.284490  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:12.309557  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:12.793412  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:12.816748  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:13.286089  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:13.310144  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:13.783915  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:13.809736  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:14.286811  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:14.310819  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:14.785289  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:14.809195  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:15.285698  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:15.310405  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:15.784216  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:15.810776  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:16.138473  454179 system_pods.go:59] 18 kube-system pods found
	I0319 18:28:16.138520  454179 system_pods.go:61] "coredns-668d6bf9bc-l8gjn" [85b0ecfa-8919-4305-b868-a2af1a298b85] Running
	I0319 18:28:16.138528  454179 system_pods.go:61] "csi-hostpath-attacher-0" [e642bc11-a37a-4378-a05e-c04a9e374277] Running
	I0319 18:28:16.138533  454179 system_pods.go:61] "csi-hostpath-resizer-0" [5b55b634-e254-4588-9993-ab087bb2be76] Running
	I0319 18:28:16.138551  454179 system_pods.go:61] "csi-hostpathplugin-2tpjt" [54f40f88-da07-414e-8b80-76e1b546c641] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0319 18:28:16.138561  454179 system_pods.go:61] "etcd-addons-039972" [3b29df3b-8202-47af-86bc-9a2c8858b81d] Running
	I0319 18:28:16.138567  454179 system_pods.go:61] "kindnet-rpwwd" [b2f638a0-90c8-42bb-b08f-34760a6f3f51] Running
	I0319 18:28:16.138576  454179 system_pods.go:61] "kube-apiserver-addons-039972" [7e96195c-f3da-44fe-aaf4-8d5b33f60036] Running
	I0319 18:28:16.138588  454179 system_pods.go:61] "kube-controller-manager-addons-039972" [dfcda3b0-1f27-44b9-8317-d5e318034883] Running
	I0319 18:28:16.138603  454179 system_pods.go:61] "kube-ingress-dns-minikube" [5d4fb645-032d-471e-abb5-89d6bd38ec6f] Running
	I0319 18:28:16.138608  454179 system_pods.go:61] "kube-proxy-n6nwg" [8ae14c8f-7f7a-4f2d-ba0b-58041647593a] Running
	I0319 18:28:16.138613  454179 system_pods.go:61] "kube-scheduler-addons-039972" [e2276fdc-bdf7-4c46-96a0-8f6de1d63076] Running
	I0319 18:28:16.138617  454179 system_pods.go:61] "metrics-server-7fbb699795-xtj74" [e1fa4fa3-31a6-4db3-a237-73516e02c68c] Running
	I0319 18:28:16.138621  454179 system_pods.go:61] "nvidia-device-plugin-daemonset-6qm78" [784e0b38-6971-40b8-b4b3-940ba70d5823] Running
	I0319 18:28:16.138625  454179 system_pods.go:61] "registry-6c88467877-d2brd" [9123c314-3886-4f1c-aacf-b378fea5fb39] Running
	I0319 18:28:16.138632  454179 system_pods.go:61] "registry-proxy-f7zs5" [70bb0c85-0653-4b37-8242-48ad64e1e791] Running
	I0319 18:28:16.138637  454179 system_pods.go:61] "snapshot-controller-68b874b76f-jqjfd" [bc9599a9-3c18-4b56-a2ac-0848d5dd5df2] Running
	I0319 18:28:16.138643  454179 system_pods.go:61] "snapshot-controller-68b874b76f-mtxws" [12009c24-3e2d-4a4c-a1cb-49866a422ef7] Running
	I0319 18:28:16.138648  454179 system_pods.go:61] "storage-provisioner" [bb636a26-16e9-46e5-9cb6-b9a8f28fb7cb] Running
	I0319 18:28:16.138676  454179 system_pods.go:74] duration metric: took 11.442957616s to wait for pod list to return data ...
	I0319 18:28:16.138690  454179 default_sa.go:34] waiting for default service account to be created ...
	I0319 18:28:16.141433  454179 default_sa.go:45] found service account: "default"
	I0319 18:28:16.141458  454179 default_sa.go:55] duration metric: took 2.760979ms for default service account to be created ...
	I0319 18:28:16.141467  454179 system_pods.go:116] waiting for k8s-apps to be running ...
	I0319 18:28:16.144994  454179 system_pods.go:86] 18 kube-system pods found
	I0319 18:28:16.145019  454179 system_pods.go:89] "coredns-668d6bf9bc-l8gjn" [85b0ecfa-8919-4305-b868-a2af1a298b85] Running
	I0319 18:28:16.145026  454179 system_pods.go:89] "csi-hostpath-attacher-0" [e642bc11-a37a-4378-a05e-c04a9e374277] Running
	I0319 18:28:16.145031  454179 system_pods.go:89] "csi-hostpath-resizer-0" [5b55b634-e254-4588-9993-ab087bb2be76] Running
	I0319 18:28:16.145039  454179 system_pods.go:89] "csi-hostpathplugin-2tpjt" [54f40f88-da07-414e-8b80-76e1b546c641] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0319 18:28:16.145044  454179 system_pods.go:89] "etcd-addons-039972" [3b29df3b-8202-47af-86bc-9a2c8858b81d] Running
	I0319 18:28:16.145063  454179 system_pods.go:89] "kindnet-rpwwd" [b2f638a0-90c8-42bb-b08f-34760a6f3f51] Running
	I0319 18:28:16.145073  454179 system_pods.go:89] "kube-apiserver-addons-039972" [7e96195c-f3da-44fe-aaf4-8d5b33f60036] Running
	I0319 18:28:16.145078  454179 system_pods.go:89] "kube-controller-manager-addons-039972" [dfcda3b0-1f27-44b9-8317-d5e318034883] Running
	I0319 18:28:16.145083  454179 system_pods.go:89] "kube-ingress-dns-minikube" [5d4fb645-032d-471e-abb5-89d6bd38ec6f] Running
	I0319 18:28:16.145087  454179 system_pods.go:89] "kube-proxy-n6nwg" [8ae14c8f-7f7a-4f2d-ba0b-58041647593a] Running
	I0319 18:28:16.145092  454179 system_pods.go:89] "kube-scheduler-addons-039972" [e2276fdc-bdf7-4c46-96a0-8f6de1d63076] Running
	I0319 18:28:16.145096  454179 system_pods.go:89] "metrics-server-7fbb699795-xtj74" [e1fa4fa3-31a6-4db3-a237-73516e02c68c] Running
	I0319 18:28:16.145103  454179 system_pods.go:89] "nvidia-device-plugin-daemonset-6qm78" [784e0b38-6971-40b8-b4b3-940ba70d5823] Running
	I0319 18:28:16.145108  454179 system_pods.go:89] "registry-6c88467877-d2brd" [9123c314-3886-4f1c-aacf-b378fea5fb39] Running
	I0319 18:28:16.145115  454179 system_pods.go:89] "registry-proxy-f7zs5" [70bb0c85-0653-4b37-8242-48ad64e1e791] Running
	I0319 18:28:16.145119  454179 system_pods.go:89] "snapshot-controller-68b874b76f-jqjfd" [bc9599a9-3c18-4b56-a2ac-0848d5dd5df2] Running
	I0319 18:28:16.145137  454179 system_pods.go:89] "snapshot-controller-68b874b76f-mtxws" [12009c24-3e2d-4a4c-a1cb-49866a422ef7] Running
	I0319 18:28:16.145141  454179 system_pods.go:89] "storage-provisioner" [bb636a26-16e9-46e5-9cb6-b9a8f28fb7cb] Running
	I0319 18:28:16.145147  454179 system_pods.go:126] duration metric: took 3.67569ms to wait for k8s-apps to be running ...
	I0319 18:28:16.145157  454179 system_svc.go:44] waiting for kubelet service to be running ....
	I0319 18:28:16.145227  454179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 18:28:16.157831  454179 system_svc.go:56] duration metric: took 12.627765ms WaitForService to wait for kubelet
	I0319 18:28:16.157864  454179 kubeadm.go:582] duration metric: took 1m53.276401147s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 18:28:16.157886  454179 node_conditions.go:102] verifying NodePressure condition ...
	I0319 18:28:16.163268  454179 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0319 18:28:16.163352  454179 node_conditions.go:123] node cpu capacity is 2
	I0319 18:28:16.164153  454179 node_conditions.go:105] duration metric: took 6.247638ms to run NodePressure ...
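The NodePressure step reads capacity straight off the Node objects (here 203034800Ki of ephemeral storage and 2 CPUs). A sketch of that lookup with client-go follows; the kubeconfig path is taken from the commands logged earlier but is otherwise an assumption:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path, matching the kubeconfig used by the kubectl runs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[v1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}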
	I0319 18:28:16.164199  454179 start.go:241] waiting for startup goroutines ...
	I0319 18:28:16.284344  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:16.310306  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:16.784670  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:16.809650  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:17.284287  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:17.310108  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:17.801168  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:17.814559  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:18.284236  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:18.309350  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:18.785239  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:18.810776  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:19.285164  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:19.308955  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:19.789393  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:19.809402  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:20.285700  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:20.309293  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:20.784759  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:20.809962  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0319 18:28:21.284701  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:21.385125  454179 kapi.go:107] duration metric: took 1m47.578914071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0319 18:28:21.388169  454179 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-039972 cluster.
	I0319 18:28:21.390962  454179 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0319 18:28:21.393673  454179 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0319 18:28:21.785644  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:22.284653  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:22.784793  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:23.284084  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:23.784476  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:24.285179  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:24.784481  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:25.283740  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:25.790469  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:26.285620  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:26.783739  454179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0319 18:28:27.285281  454179 kapi.go:107] duration metric: took 1m57.004564757s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0319 18:28:27.288815  454179 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, amd-gpu-device-plugin, storage-provisioner-rancher, ingress-dns, cloud-spanner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0319 18:28:27.291746  454179 addons.go:514] duration metric: took 2m4.409772596s for enable addons: enabled=[nvidia-device-plugin storage-provisioner amd-gpu-device-plugin storage-provisioner-rancher ingress-dns cloud-spanner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0319 18:28:27.291814  454179 start.go:246] waiting for cluster config update ...
	I0319 18:28:27.291840  454179 start.go:255] writing updated cluster config ...
	I0319 18:28:27.292151  454179 ssh_runner.go:195] Run: rm -f paused
	I0319 18:28:27.691857  454179 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0319 18:28:27.694959  454179 out.go:177] * Done! kubectl is now configured to use "addons-039972" cluster and "default" namespace by default
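The gcp-auth messages earlier in this log imply that credentials are injected when a pod is created (existing pods must be recreated), so the gcp-auth-skip-secret key has to be present in the pod's own metadata. A minimal sketch of such a manifest; the label key is taken from the message, while the pod name, container, image, and the value "true" are assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"  # key from the log message above; "true" is an assumed value
    spec:
      containers:
      - name: app                     # hypothetical container
        image: docker.io/library/busybox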
	
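The kapi.go lines above poll pods matching minikube's addon labels until they leave Pending. The same pods can be listed by hand with a label selector; a sketch using the label string and context name exactly as they appear in this log:

    kubectl --context addons-039972 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver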
	
	==> CRI-O <==
	Mar 19 18:31:20 addons-039972 crio[981]: time="2025-03-19 18:31:20.074970789Z" level=info msg="Removed pod sandbox: 43da4b1cd3f47a4396588e9e6c74cf6e4bc7a831a3623e71cf18b525e1414f45" id=8bf2e602-715c-4c17-93f0-a78720b671dd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.821120299Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-5pnjt/POD" id=279d9d09-5122-4e74-9536-65695c167cd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.821183511Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.864600640Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-5pnjt Namespace:default ID:65d7df276b9628299232639eb40bf8dc08287d797c28d2bcc29498e925f4f5a0 UID:b7018b95-3ad6-41d0-96d2-e48b2dc782c1 NetNS:/var/run/netns/7187b613-f9e9-4dd5-bffc-2043bfd7474b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.864647713Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-5pnjt to CNI network \"kindnet\" (type=ptp)"
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.889948292Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-5pnjt Namespace:default ID:65d7df276b9628299232639eb40bf8dc08287d797c28d2bcc29498e925f4f5a0 UID:b7018b95-3ad6-41d0-96d2-e48b2dc782c1 NetNS:/var/run/netns/7187b613-f9e9-4dd5-bffc-2043bfd7474b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.890099808Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-5pnjt for CNI network kindnet (type=ptp)"
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.893479916Z" level=info msg="Ran pod sandbox 65d7df276b9628299232639eb40bf8dc08287d797c28d2bcc29498e925f4f5a0 with infra container: default/hello-world-app-7d9564db4-5pnjt/POD" id=279d9d09-5122-4e74-9536-65695c167cd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.898230925Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c70b93d0-b505-4a15-8a02-6b5750025c08 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.898470171Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=c70b93d0-b505-4a15-8a02-6b5750025c08 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.901340927Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=079ce357-9556-40e8-a0e3-3af2e2c0d0b1 name=/runtime.v1.ImageService/PullImage
	Mar 19 18:32:37 addons-039972 crio[981]: time="2025-03-19 18:32:37.904911361Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Mar 19 18:32:38 addons-039972 crio[981]: time="2025-03-19 18:32:38.209781109Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Mar 19 18:32:38 addons-039972 crio[981]: time="2025-03-19 18:32:38.972853125Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=079ce357-9556-40e8-a0e3-3af2e2c0d0b1 name=/runtime.v1.ImageService/PullImage
	Mar 19 18:32:38 addons-039972 crio[981]: time="2025-03-19 18:32:38.973806417Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=43079034-0b34-4864-8a3a-8e8210c08a4d name=/runtime.v1.ImageService/ImageStatus
	Mar 19 18:32:38 addons-039972 crio[981]: time="2025-03-19 18:32:38.974464331Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=43079034-0b34-4864-8a3a-8e8210c08a4d name=/runtime.v1.ImageService/ImageStatus
	Mar 19 18:32:38 addons-039972 crio[981]: time="2025-03-19 18:32:38.981086404Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=734b6bc6-405c-4284-89ad-4d54ad6a5af8 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 18:32:38 addons-039972 crio[981]: time="2025-03-19 18:32:38.981836306Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=734b6bc6-405c-4284-89ad-4d54ad6a5af8 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 18:32:38 addons-039972 crio[981]: time="2025-03-19 18:32:38.990070234Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-5pnjt/hello-world-app" id=6cd5631d-6951-4e93-807e-7f65b9498174 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 19 18:32:38 addons-039972 crio[981]: time="2025-03-19 18:32:38.990174726Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 19 18:32:39 addons-039972 crio[981]: time="2025-03-19 18:32:39.022251405Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1edf0020c97f03818eb980a900050a293174de9d26c016d602c5faf6d38d7314/merged/etc/passwd: no such file or directory"
	Mar 19 18:32:39 addons-039972 crio[981]: time="2025-03-19 18:32:39.022298511Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1edf0020c97f03818eb980a900050a293174de9d26c016d602c5faf6d38d7314/merged/etc/group: no such file or directory"
	Mar 19 18:32:39 addons-039972 crio[981]: time="2025-03-19 18:32:39.068940867Z" level=info msg="Created container 552f879b31079c70f25207ae5ee5455a114636c166569b494faa0e7082ea0dc7: default/hello-world-app-7d9564db4-5pnjt/hello-world-app" id=6cd5631d-6951-4e93-807e-7f65b9498174 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 19 18:32:39 addons-039972 crio[981]: time="2025-03-19 18:32:39.070376220Z" level=info msg="Starting container: 552f879b31079c70f25207ae5ee5455a114636c166569b494faa0e7082ea0dc7" id=8a0dd737-464c-4a69-8116-bde38cfd5f8b name=/runtime.v1.RuntimeService/StartContainer
	Mar 19 18:32:39 addons-039972 crio[981]: time="2025-03-19 18:32:39.080293805Z" level=info msg="Started container" PID=9271 containerID=552f879b31079c70f25207ae5ee5455a114636c166569b494faa0e7082ea0dc7 description=default/hello-world-app-7d9564db4-5pnjt/hello-world-app id=8a0dd737-464c-4a69-8116-bde38cfd5f8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=65d7df276b9628299232639eb40bf8dc08287d797c28d2bcc29498e925f4f5a0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	552f879b31079       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   65d7df276b962       hello-world-app-7d9564db4-5pnjt
	50ca9b75879cb       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago            Running             nginx                     0                   7f414a05d8f1e       nginx
	5a7c7aae15878       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   48afb26e67ce2       busybox
	6fbcfadd1267e       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             4 minutes ago            Running             controller                0                   a1045ba71894b       ingress-nginx-controller-56d7c84fd4-xpqbn
	dbbc2d4772e45       d54655ed3a8543a162b688a24bf969ee1a28d906b8ccb30188059247efdae234                                                             4 minutes ago            Exited              patch                     2                   6cbf20e67ee47       ingress-nginx-admission-patch-tm8gk
	409c7d5caf4d5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              create                    0                   51734de4039ea       ingress-nginx-admission-create-n6lw7
	4c240c6e79ca8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns      0                   392ba468692ac       kube-ingress-dns-minikube
	7a1f7f73e3286       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             5 minutes ago            Running             coredns                   0                   937c42bba902d       coredns-668d6bf9bc-l8gjn
	9d928cb24e6c5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   77b3184faa5b5       storage-provisioner
	837ccba751ee8       docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955                           6 minutes ago            Running             kindnet-cni               0                   f537e3e759c23       kindnet-rpwwd
	29952e4bb0bae       e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062                                                             6 minutes ago            Running             kube-proxy                0                   d467bf496cda9       kube-proxy-n6nwg
	1242fd605f8f8       82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911                                                             6 minutes ago            Running             kube-scheduler            0                   8fa651295788f       kube-scheduler-addons-039972
	3c993a11f32ae       3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d                                                             6 minutes ago            Running             kube-controller-manager   0                   d40e7339dd603       kube-controller-manager-addons-039972
	41b8d0c3e7f5e       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             6 minutes ago            Running             etcd                      0                   8af7a7ad80d3d       etcd-addons-039972
	333cd2a181c96       6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32                                                             6 minutes ago            Running             kube-apiserver            0                   3c1106d48388a       kube-apiserver-addons-039972
	
	
	==> coredns [7a1f7f73e3286762731132c336a200a5b40143c52a5a03e861237c6154a0ef85] <==
	[INFO] 10.244.0.11:39156 - 42922 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003052394s
	[INFO] 10.244.0.11:39156 - 23273 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000177486s
	[INFO] 10.244.0.11:39156 - 53271 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000104017s
	[INFO] 10.244.0.11:41045 - 56335 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00017989s
	[INFO] 10.244.0.11:41045 - 56812 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000247016s
	[INFO] 10.244.0.11:50078 - 19952 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000146273s
	[INFO] 10.244.0.11:50078 - 19515 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000260522s
	[INFO] 10.244.0.11:58929 - 6663 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115315s
	[INFO] 10.244.0.11:58929 - 6442 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073862s
	[INFO] 10.244.0.11:53797 - 40698 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002118022s
	[INFO] 10.244.0.11:53797 - 40204 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002135294s
	[INFO] 10.244.0.11:60729 - 42419 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000269941s
	[INFO] 10.244.0.11:60729 - 42600 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000514201s
	[INFO] 10.244.0.21:58189 - 34493 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000173818s
	[INFO] 10.244.0.21:35292 - 34729 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183278s
	[INFO] 10.244.0.21:58905 - 35958 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164981s
	[INFO] 10.244.0.21:56277 - 48052 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010537s
	[INFO] 10.244.0.21:49923 - 42445 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121822s
	[INFO] 10.244.0.21:42748 - 58033 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012407s
	[INFO] 10.244.0.21:59172 - 39431 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002233149s
	[INFO] 10.244.0.21:40078 - 3330 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002064426s
	[INFO] 10.244.0.21:53441 - 57115 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003109296s
	[INFO] 10.244.0.21:43075 - 44835 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004558934s
	[INFO] 10.244.0.24:32787 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000187537s
	[INFO] 10.244.0.24:49189 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00094941s
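The NXDOMAIN/NOERROR pairs above are ordinary DNS search-path expansion: with ndots:5, a name such as registry.kube-system.svc.cluster.local has fewer than five dots, so the resolver tries every suffix from the pod's resolv.conf (each answered NXDOMAIN) before querying the name as-is (answered NOERROR). A sketch of a resolv.conf that would produce exactly the suffixes seen in this log; the search domains are read off the queries above, while the nameserver address is an assumption:

    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10   # assumed kube-dns ClusterIP
    options ndots:5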
	
	
	==> describe nodes <==
	Name:               addons-039972
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-039972
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d76a625434f413a89ad1bb610dea10300ea9201f
	                    minikube.k8s.io/name=addons-039972
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_19T18_26_18_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-039972
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Mar 2025 18:26:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-039972
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Mar 2025 18:32:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Mar 2025 18:30:53 +0000   Wed, 19 Mar 2025 18:26:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Mar 2025 18:30:53 +0000   Wed, 19 Mar 2025 18:26:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Mar 2025 18:30:53 +0000   Wed, 19 Mar 2025 18:26:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Mar 2025 18:30:53 +0000   Wed, 19 Mar 2025 18:27:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-039972
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 17cc134590ad41e5894ca61fd4821712
	  System UUID:                b4f7a4d8-cb40-41f6-a838-adfa2fbbb95e
	  Boot ID:                    48f0ca68-a8da-47aa-b0f9-4e2bea015ace
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  default                     hello-world-app-7d9564db4-5pnjt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-xpqbn    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m10s
	  kube-system                 coredns-668d6bf9bc-l8gjn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m16s
	  kube-system                 etcd-addons-039972                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m22s
	  kube-system                 kindnet-rpwwd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m17s
	  kube-system                 kube-apiserver-addons-039972                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-addons-039972        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-n6nwg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-addons-039972                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m10s  kube-proxy       
	  Normal   Starting                 6m22s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m22s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m22s  kubelet          Node addons-039972 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m22s  kubelet          Node addons-039972 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m22s  kubelet          Node addons-039972 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m18s  node-controller  Node addons-039972 event: Registered Node addons-039972 in Controller
	  Normal   NodeReady                5m30s  kubelet          Node addons-039972 status is now: NodeReady
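The percentages in the Allocated resources table above are requests (or limits) divided by the node's allocatable amounts, truncated to a whole percent, which can be checked against the capacity figures in the same output (a worked example, no new data assumed):

    cpu requests:    950m / 2000m                  = 47.5%  -> printed as 47%
    memory requests: 310Mi (= 317440Ki) / 8022304Ki ≈ 3.96% -> printed as 3%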
	
	
	==> dmesg <==
	[Mar19 17:30] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Mar19 17:52] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [41b8d0c3e7f5ecc5a526c4244baad0b485548bf2f4c82c6860959c4074f38e53] <==
	{"level":"info","ts":"2025-03-19T18:26:12.315516Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-19T18:26:12.316416Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-03-19T18:26:12.316834Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-19T18:26:12.316950Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-19T18:26:12.317010Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-19T18:26:12.317894Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-19T18:26:12.318674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-19T18:26:12.319888Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-19T18:26:12.319918Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-19T18:26:26.141624Z","caller":"traceutil/trace.go:171","msg":"trace[1792440558] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"100.048516ms","start":"2025-03-19T18:26:26.041552Z","end":"2025-03-19T18:26:26.141600Z","steps":["trace[1792440558] 'process raft request'  (duration: 64.106777ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-19T18:26:26.144319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.679206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-n6nwg\" limit:1 ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2025-03-19T18:26:26.144390Z","caller":"traceutil/trace.go:171","msg":"trace[1938743100] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-n6nwg; range_end:; response_count:1; response_revision:347; }","duration":"102.761946ms","start":"2025-03-19T18:26:26.041614Z","end":"2025-03-19T18:26:26.144376Z","steps":["trace[1938743100] 'agreement among raft nodes before linearized reading'  (duration: 64.816162ms)","trace[1938743100] 'range keys from in-memory index tree'  (duration: 37.831183ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-19T18:26:27.908979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.693625ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-19T18:26:27.990220Z","caller":"traceutil/trace.go:171","msg":"trace[1478198630] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:393; }","duration":"199.94052ms","start":"2025-03-19T18:26:27.790257Z","end":"2025-03-19T18:26:27.990197Z","steps":["trace[1478198630] 'agreement among raft nodes before linearized reading'  (duration: 118.668747ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-19T18:26:28.037620Z","caller":"traceutil/trace.go:171","msg":"trace[1234663893] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"174.02272ms","start":"2025-03-19T18:26:27.863575Z","end":"2025-03-19T18:26:28.037597Z","steps":["trace[1234663893] 'process raft request'  (duration: 151.728433ms)","trace[1234663893] 'compare'  (duration: 22.145593ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-19T18:26:28.038404Z","caller":"traceutil/trace.go:171","msg":"trace[583122903] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"126.413268ms","start":"2025-03-19T18:26:27.911981Z","end":"2025-03-19T18:26:28.038394Z","steps":["trace[583122903] 'process raft request'  (duration: 125.573963ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-19T18:26:28.041020Z","caller":"traceutil/trace.go:171","msg":"trace[1831024866] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"100.594669ms","start":"2025-03-19T18:26:27.940408Z","end":"2025-03-19T18:26:28.041002Z","steps":["trace[1831024866] 'process raft request'  (duration: 97.663827ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-19T18:26:28.074082Z","caller":"traceutil/trace.go:171","msg":"trace[1935814687] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"133.563524ms","start":"2025-03-19T18:26:27.940502Z","end":"2025-03-19T18:26:28.074066Z","steps":["trace[1935814687] 'process raft request'  (duration: 115.952873ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-19T18:26:28.075953Z","caller":"traceutil/trace.go:171","msg":"trace[333497609] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:404; }","duration":"135.898728ms","start":"2025-03-19T18:26:27.940035Z","end":"2025-03-19T18:26:28.075934Z","steps":["trace[333497609] 'read index received'  (duration: 65.224747ms)","trace[333497609] 'applied index is now lower than readState.Index'  (duration: 70.672332ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-19T18:26:28.076040Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.982773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-19T18:26:28.081847Z","caller":"traceutil/trace.go:171","msg":"trace[127351541] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:405; }","duration":"141.797113ms","start":"2025-03-19T18:26:27.940030Z","end":"2025-03-19T18:26:28.081827Z","steps":["trace[127351541] 'agreement among raft nodes before linearized reading'  (duration: 135.954145ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-19T18:26:28.082104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.972342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-19T18:26:28.082275Z","caller":"traceutil/trace.go:171","msg":"trace[1346491720] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:0; response_revision:405; }","duration":"142.153964ms","start":"2025-03-19T18:26:27.940111Z","end":"2025-03-19T18:26:28.082265Z","steps":["trace[1346491720] 'agreement among raft nodes before linearized reading'  (duration: 141.948999ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-19T18:26:28.083197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.038291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-19T18:26:28.097929Z","caller":"traceutil/trace.go:171","msg":"trace[305940122] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:405; }","duration":"157.760144ms","start":"2025-03-19T18:26:27.940145Z","end":"2025-03-19T18:26:28.097905Z","steps":["trace[305940122] 'agreement among raft nodes before linearized reading'  (duration: 143.027099ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:32:39 up  2:14,  0 users,  load average: 0.30, 1.67, 2.82
	Linux addons-039972 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [837ccba751ee89f344a8bccb40d384a62d148749ec43854911b3b532a3a387ed] <==
	I0319 18:30:38.857889       1 main.go:301] handling current node
	I0319 18:30:48.851757       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:30:48.851789       1 main.go:301] handling current node
	I0319 18:30:58.851738       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:30:58.851771       1 main.go:301] handling current node
	I0319 18:31:08.854819       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:31:08.854859       1 main.go:301] handling current node
	I0319 18:31:18.857390       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:31:18.857421       1 main.go:301] handling current node
	I0319 18:31:28.851481       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:31:28.851516       1 main.go:301] handling current node
	I0319 18:31:38.857618       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:31:38.857738       1 main.go:301] handling current node
	I0319 18:31:48.851758       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:31:48.851791       1 main.go:301] handling current node
	I0319 18:31:58.857860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:31:58.857892       1 main.go:301] handling current node
	I0319 18:32:08.858266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:32:08.858297       1 main.go:301] handling current node
	I0319 18:32:18.855857       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:32:18.855890       1 main.go:301] handling current node
	I0319 18:32:28.851583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:32:28.851614       1 main.go:301] handling current node
	I0319 18:32:38.851704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0319 18:32:38.851739       1 main.go:301] handling current node
	
	
	==> kube-apiserver [333cd2a181c964ec63f1aa797727ce5f24bc61bef4c474e14e9b151bf967aad8] <==
	E0319 18:27:29.162730       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.59:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.59:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.59:443: connect: connection refused" logger="UnhandledError"
	I0319 18:27:29.331004       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0319 18:28:38.809629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49914: use of closed network connection
	E0319 18:28:39.061671       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49934: use of closed network connection
	E0319 18:28:39.214514       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49964: use of closed network connection
	I0319 18:28:48.612312       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.213.119"}
	I0319 18:29:30.162796       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0319 18:29:36.499779       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0319 18:29:57.061420       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0319 18:30:10.620467       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0319 18:30:11.750388       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0319 18:30:16.213065       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0319 18:30:16.611979       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.0.128"}
	I0319 18:30:19.017200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 18:30:19.018646       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0319 18:30:19.079772       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 18:30:19.082586       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0319 18:30:19.127308       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 18:30:19.127443       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0319 18:30:19.305260       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0319 18:30:19.305385       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0319 18:30:20.306264       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0319 18:30:20.315603       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0319 18:30:20.384174       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0319 18:32:37.760421       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.229.222"}
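The alloc.go entries above record ClusterIP assignment at service creation; the assigned address can be confirmed afterwards with a plain service lookup. A sketch using the same context name as the rest of this report:

    kubectl --context addons-039972 get svc hello-world-app -o wide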
	
	
	==> kube-controller-manager [3c993a11f32ae20f56562d64cf6abd31bd8ec469975f90ea69baf857b1af24e9] <==
	W0319 18:31:42.167356       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0319 18:31:42.169297       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0319 18:31:42.170802       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 18:31:42.170864       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0319 18:32:04.843264       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0319 18:32:04.844288       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0319 18:32:04.845151       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 18:32:04.845191       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0319 18:32:07.227444       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0319 18:32:07.228590       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0319 18:32:07.229611       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 18:32:07.229649       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0319 18:32:10.752577       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0319 18:32:10.753730       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0319 18:32:10.754783       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 18:32:10.754824       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0319 18:32:16.057921       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0319 18:32:16.059248       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0319 18:32:16.060242       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0319 18:32:16.060331       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0319 18:32:37.529519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="42.109867ms"
	I0319 18:32:37.543322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.638863ms"
	I0319 18:32:37.543581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="36.751µs"
	I0319 18:32:39.172526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.577551ms"
	I0319 18:32:39.172669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="37.194µs"
	
	
	==> kube-proxy [29952e4bb0baef8b930cb4c006b27c1e679d93f7f97f0aec980b9e7e7cb46a6e] <==
	I0319 18:26:28.525761       1 server_linux.go:66] "Using iptables proxy"
	I0319 18:26:29.169216       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0319 18:26:29.169285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0319 18:26:29.418678       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0319 18:26:29.418801       1 server_linux.go:170] "Using iptables Proxier"
	I0319 18:26:29.421238       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0319 18:26:29.421623       1 server.go:497] "Version info" version="v1.32.2"
	I0319 18:26:29.436610       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 18:26:29.438282       1 config.go:199] "Starting service config controller"
	I0319 18:26:29.438357       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0319 18:26:29.438392       1 config.go:105] "Starting endpoint slice config controller"
	I0319 18:26:29.438397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0319 18:26:29.443961       1 config.go:329] "Starting node config controller"
	I0319 18:26:29.449757       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0319 18:26:29.544931       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0319 18:26:29.544974       1 shared_informer.go:320] Caches are synced for service config
	I0319 18:26:29.950492       1 shared_informer.go:320] Caches are synced for node config
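The configuration warning above is kube-proxy noting that with nodePortAddresses unset, NodePort services accept connections on every local address. The setting it suggests lives in KubeProxyConfiguration; a minimal sketch of that fragment, where the API group and field name are the standard ones and treating "primary" as a supported value is taken from the warning itself:

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses:
    - primary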
	
	
	==> kube-scheduler [1242fd605f8f8bda479c006b26e5ef6e64262ef3798382c28854736d0a321e65] <==
	W0319 18:26:16.135061       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0319 18:26:16.135208       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0319 18:26:16.135226       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0319 18:26:16.135202       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0319 18:26:16.135257       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0319 18:26:16.135314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0319 18:26:16.135332       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0319 18:26:16.135359       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0319 18:26:16.135367       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0319 18:26:16.135440       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0319 18:26:16.135459       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0319 18:26:16.135439       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0319 18:26:16.135503       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0319 18:26:16.135519       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0319 18:26:16.135571       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0319 18:26:16.135587       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0319 18:26:16.135625       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0319 18:26:16.135643       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0319 18:26:16.135287       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0319 18:26:16.135665       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0319 18:26:16.135742       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0319 18:26:16.135807       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0319 18:26:16.136798       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0319 18:26:16.136827       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0319 18:26:17.029125       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 18:32:17 addons-039972 kubelet[1532]: E0319 18:32:17.851957    1532 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cb7f5010edbbe31fa7452a97f7295159c0f8a0c8043297c01e2b123ca2a244b0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cb7f5010edbbe31fa7452a97f7295159c0f8a0c8043297c01e2b123ca2a244b0/diff: no such file or directory, extraDiskErr: <nil>
	Mar 19 18:32:17 addons-039972 kubelet[1532]: E0319 18:32:17.858222    1532 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2074cd03186c3953d48465949630f866653e9ddfd6d0d870ce0001ec4f8b964d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2074cd03186c3953d48465949630f866653e9ddfd6d0d870ce0001ec4f8b964d/diff: no such file or directory, extraDiskErr: <nil>
	Mar 19 18:32:17 addons-039972 kubelet[1532]: E0319 18:32:17.866588    1532 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bd8ea1bf642fcbe0649a84e715aff000367039c30d16feddd272c6fddf7d2b63/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bd8ea1bf642fcbe0649a84e715aff000367039c30d16feddd272c6fddf7d2b63/diff: no such file or directory, extraDiskErr: <nil>
	Mar 19 18:32:17 addons-039972 kubelet[1532]: E0319 18:32:17.866750    1532 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b20d8cddaf1fe64671cc06e70fb5456fb67e2f30299ebe0818e8895608b9e31e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b20d8cddaf1fe64671cc06e70fb5456fb67e2f30299ebe0818e8895608b9e31e/diff: no such file or directory, extraDiskErr: <nil>
	Mar 19 18:32:17 addons-039972 kubelet[1532]: E0319 18:32:17.869081    1532 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/368d6dc9614bdc0ca83e17727cb7f145c947fd5212b8fa42cb20d23821d86ce1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/368d6dc9614bdc0ca83e17727cb7f145c947fd5212b8fa42cb20d23821d86ce1/diff: no such file or directory, extraDiskErr: <nil>
	Mar 19 18:32:17 addons-039972 kubelet[1532]: E0319 18:32:17.877462    1532 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cb7f5010edbbe31fa7452a97f7295159c0f8a0c8043297c01e2b123ca2a244b0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cb7f5010edbbe31fa7452a97f7295159c0f8a0c8043297c01e2b123ca2a244b0/diff: no such file or directory, extraDiskErr: <nil>
	Mar 19 18:32:17 addons-039972 kubelet[1532]: E0319 18:32:17.928191    1532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742409137927951345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605732,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 19 18:32:17 addons-039972 kubelet[1532]: E0319 18:32:17.928229    1532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742409137927951345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605732,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 19 18:32:27 addons-039972 kubelet[1532]: E0319 18:32:27.931349    1532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742409147931056863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605732,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 19 18:32:27 addons-039972 kubelet[1532]: E0319 18:32:27.931395    1532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742409147931056863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605732,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519775    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="12009c24-3e2d-4a4c-a1cb-49866a422ef7" containerName="volume-snapshot-controller"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519821    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="54f40f88-da07-414e-8b80-76e1b546c641" containerName="csi-external-health-monitor-controller"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519829    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="54f40f88-da07-414e-8b80-76e1b546c641" containerName="node-driver-registrar"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519837    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="54f40f88-da07-414e-8b80-76e1b546c641" containerName="csi-snapshotter"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519843    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="54f40f88-da07-414e-8b80-76e1b546c641" containerName="hostpath"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519849    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="54f40f88-da07-414e-8b80-76e1b546c641" containerName="csi-provisioner"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519857    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="bc9599a9-3c18-4b56-a2ac-0848d5dd5df2" containerName="volume-snapshot-controller"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519863    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="4190286b-8f85-4671-b992-ce15d1c7eae8" containerName="task-pv-container"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519871    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="5b55b634-e254-4588-9993-ab087bb2be76" containerName="csi-resizer"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519876    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="e642bc11-a37a-4378-a05e-c04a9e374277" containerName="csi-attacher"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.519882    1532 memory_manager.go:355] "RemoveStaleState removing state" podUID="54f40f88-da07-414e-8b80-76e1b546c641" containerName="liveness-probe"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: I0319 18:32:37.540438    1532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdvg\" (UniqueName: \"kubernetes.io/projected/b7018b95-3ad6-41d0-96d2-e48b2dc782c1-kube-api-access-xhdvg\") pod \"hello-world-app-7d9564db4-5pnjt\" (UID: \"b7018b95-3ad6-41d0-96d2-e48b2dc782c1\") " pod="default/hello-world-app-7d9564db4-5pnjt"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: W0319 18:32:37.892038    1532 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fb7c8782a39ae80b81aae63592478f5dedf64112370bdf789812bdc35838b44e/crio-65d7df276b9628299232639eb40bf8dc08287d797c28d2bcc29498e925f4f5a0 WatchSource:0}: Error finding container 65d7df276b9628299232639eb40bf8dc08287d797c28d2bcc29498e925f4f5a0: Status 404 returned error can't find the container with id 65d7df276b9628299232639eb40bf8dc08287d797c28d2bcc29498e925f4f5a0
	Mar 19 18:32:37 addons-039972 kubelet[1532]: E0319 18:32:37.934105    1532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742409157933860824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605732,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 19 18:32:37 addons-039972 kubelet[1532]: E0319 18:32:37.934139    1532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742409157933860824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605732,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9d928cb24e6c582e24f6c27dc075bc3ceb29eacaf82949b297f36a9e3c7f0602] <==
	I0319 18:27:09.967588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0319 18:27:09.981321       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0319 18:27:09.981435       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0319 18:27:09.995428       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0319 18:27:09.995564       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0dd0349e-deba-4f81-aaa6-7c45bd176a78", APIVersion:"v1", ResourceVersion:"889", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-039972_3bf4dd79-742a-465a-a08a-5dbb96fedd95 became leader
	I0319 18:27:09.996216       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-039972_3bf4dd79-742a-465a-a08a-5dbb96fedd95!
	I0319 18:27:10.096644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-039972_3bf4dd79-742a-465a-a08a-5dbb96fedd95!
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-039972 -n addons-039972
helpers_test.go:261: (dbg) Run:  kubectl --context addons-039972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-n6lw7 ingress-nginx-admission-patch-tm8gk
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-039972 describe pod ingress-nginx-admission-create-n6lw7 ingress-nginx-admission-patch-tm8gk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-039972 describe pod ingress-nginx-admission-create-n6lw7 ingress-nginx-admission-patch-tm8gk: exit status 1 (80.400084ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-n6lw7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tm8gk" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-039972 describe pod ingress-nginx-admission-create-n6lw7 ingress-nginx-admission-patch-tm8gk: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 addons disable ingress-dns --alsologtostderr -v=1: (1.348072241s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 addons disable ingress --alsologtostderr -v=1: (7.808649809s)
--- FAIL: TestAddons/parallel/Ingress (154.14s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.88s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-935881 image list --format=json
start_stop_delete_test.go:302: v1.32.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.16-0",
- 	"registry.k8s.io/kube-apiserver:v1.32.2",
- 	"registry.k8s.io/kube-controller-manager:v1.32.2",
- 	"registry.k8s.io/kube-proxy:v1.32.2",
- 	"registry.k8s.io/kube-scheduler:v1.32.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-935881
helpers_test.go:235: (dbg) docker inspect newest-cni-935881:
-- stdout --
	[
	    {
	        "Id": "e5afa4e1b83eced6141a8bd03353d14a0e35d3fd0013b5c321e129cb5f7afaac",
	        "Created": "2025-03-19T19:25:02.222587071Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 673436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-19T19:25:33.228760619Z",
	            "FinishedAt": "2025-03-19T19:25:32.377640891Z"
	        },
	        "Image": "sha256:df0c2544fb3106b890f0a9ab81fcf49f97edb092b83e47f42288ad5dfe1f4b40",
	        "ResolvConfPath": "/var/lib/docker/containers/e5afa4e1b83eced6141a8bd03353d14a0e35d3fd0013b5c321e129cb5f7afaac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5afa4e1b83eced6141a8bd03353d14a0e35d3fd0013b5c321e129cb5f7afaac/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5afa4e1b83eced6141a8bd03353d14a0e35d3fd0013b5c321e129cb5f7afaac/hosts",
	        "LogPath": "/var/lib/docker/containers/e5afa4e1b83eced6141a8bd03353d14a0e35d3fd0013b5c321e129cb5f7afaac/e5afa4e1b83eced6141a8bd03353d14a0e35d3fd0013b5c321e129cb5f7afaac-json.log",
	        "Name": "/newest-cni-935881",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-935881:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-935881",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5afa4e1b83eced6141a8bd03353d14a0e35d3fd0013b5c321e129cb5f7afaac",
	                "LowerDir": "/var/lib/docker/overlay2/c4d2d0d19f542ac77777a15173b450c5e70d28bcf65b6078f6c9c765b8d68f8e-init/diff:/var/lib/docker/overlay2/55bf5981cfa2c5a324266a998a6b44d59c28d371542dcf93ef413ea591419fb7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c4d2d0d19f542ac77777a15173b450c5e70d28bcf65b6078f6c9c765b8d68f8e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c4d2d0d19f542ac77777a15173b450c5e70d28bcf65b6078f6c9c765b8d68f8e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c4d2d0d19f542ac77777a15173b450c5e70d28bcf65b6078f6c9c765b8d68f8e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-935881",
	                "Source": "/var/lib/docker/volumes/newest-cni-935881/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-935881",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-935881",
	                "name.minikube.sigs.k8s.io": "newest-cni-935881",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "158ae7d37644d5c419f5d804700872b1a069199bf58eadaea9e8f9fd20f6eb89",
	            "SandboxKey": "/var/run/docker/netns/158ae7d37644",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-935881": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:6b:02:7d:83:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aed12e141cccd598f03788b12670c986a047cef09028512d21b370f8b4816210",
	                    "EndpointID": "1cc90fb43b8945f2aa16fd4b43a6f6978afad521376a2afe8737b9552ca85ef9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-935881",
	                        "e5afa4e1b83e"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-935881 -n newest-cni-935881
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-935881 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-935881 logs -n 25: (1.38989688s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-584735                                  | embed-certs-584735           | jenkins | v1.35.0 | 19 Mar 25 19:19 UTC | 19 Mar 25 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-584735                 | embed-certs-584735           | jenkins | v1.35.0 | 19 Mar 25 19:20 UTC | 19 Mar 25 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-584735                                  | embed-certs-584735           | jenkins | v1.35.0 | 19 Mar 25 19:20 UTC | 19 Mar 25 19:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-863158 image list                           | no-preload-863158            | jenkins | v1.35.0 | 19 Mar 25 19:21 UTC | 19 Mar 25 19:21 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-863158                                   | no-preload-863158            | jenkins | v1.35.0 | 19 Mar 25 19:21 UTC | 19 Mar 25 19:21 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-863158                                   | no-preload-863158            | jenkins | v1.35.0 | 19 Mar 25 19:21 UTC | 19 Mar 25 19:21 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-863158                                   | no-preload-863158            | jenkins | v1.35.0 | 19 Mar 25 19:21 UTC | 19 Mar 25 19:21 UTC |
	| delete  | -p no-preload-863158                                   | no-preload-863158            | jenkins | v1.35.0 | 19 Mar 25 19:21 UTC | 19 Mar 25 19:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-089369 | jenkins | v1.35.0 | 19 Mar 25 19:21 UTC | 19 Mar 25 19:21 UTC |
	|         | disable-driver-mounts-089369                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-303589 | jenkins | v1.35.0 | 19 Mar 25 19:21 UTC | 19 Mar 25 19:22 UTC |
	|         | default-k8s-diff-port-303589                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-303589  | default-k8s-diff-port-303589 | jenkins | v1.35.0 | 19 Mar 25 19:22 UTC | 19 Mar 25 19:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-303589 | jenkins | v1.35.0 | 19 Mar 25 19:22 UTC | 19 Mar 25 19:22 UTC |
	|         | default-k8s-diff-port-303589                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-303589       | default-k8s-diff-port-303589 | jenkins | v1.35.0 | 19 Mar 25 19:22 UTC | 19 Mar 25 19:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-303589 | jenkins | v1.35.0 | 19 Mar 25 19:22 UTC |                     |
	|         | default-k8s-diff-port-303589                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | embed-certs-584735 image list                          | embed-certs-584735           | jenkins | v1.35.0 | 19 Mar 25 19:24 UTC | 19 Mar 25 19:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-584735                                  | embed-certs-584735           | jenkins | v1.35.0 | 19 Mar 25 19:24 UTC | 19 Mar 25 19:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-584735                                  | embed-certs-584735           | jenkins | v1.35.0 | 19 Mar 25 19:24 UTC | 19 Mar 25 19:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-584735                                  | embed-certs-584735           | jenkins | v1.35.0 | 19 Mar 25 19:24 UTC | 19 Mar 25 19:24 UTC |
	| delete  | -p embed-certs-584735                                  | embed-certs-584735           | jenkins | v1.35.0 | 19 Mar 25 19:24 UTC | 19 Mar 25 19:24 UTC |
	| start   | -p newest-cni-935881 --memory=2200 --alsologtostderr   | newest-cni-935881            | jenkins | v1.35.0 | 19 Mar 25 19:24 UTC | 19 Mar 25 19:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-935881             | newest-cni-935881            | jenkins | v1.35.0 | 19 Mar 25 19:25 UTC | 19 Mar 25 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-935881                                   | newest-cni-935881            | jenkins | v1.35.0 | 19 Mar 25 19:25 UTC | 19 Mar 25 19:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-935881                  | newest-cni-935881            | jenkins | v1.35.0 | 19 Mar 25 19:25 UTC | 19 Mar 25 19:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-935881 --memory=2200 --alsologtostderr   | newest-cni-935881            | jenkins | v1.35.0 | 19 Mar 25 19:25 UTC | 19 Mar 25 19:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-935881 image list                           | newest-cni-935881            | jenkins | v1.35.0 | 19 Mar 25 19:25 UTC | 19 Mar 25 19:25 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/19 19:25:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 19:25:32.941052  673306 out.go:345] Setting OutFile to fd 1 ...
	I0319 19:25:32.941195  673306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 19:25:32.941208  673306 out.go:358] Setting ErrFile to fd 2...
	I0319 19:25:32.941213  673306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 19:25:32.941497  673306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 19:25:32.942084  673306 out.go:352] Setting JSON to false
	I0319 19:25:32.943150  673306 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11267,"bootTime":1742401066,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0319 19:25:32.943298  673306 start.go:139] virtualization:  
	I0319 19:25:32.948359  673306 out.go:177] * [newest-cni-935881] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0319 19:25:32.951301  673306 out.go:177]   - MINIKUBE_LOCATION=20544
	I0319 19:25:32.951374  673306 notify.go:220] Checking for updates...
	I0319 19:25:32.957065  673306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:25:32.959988  673306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 19:25:32.962902  673306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	I0319 19:25:32.965584  673306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0319 19:25:32.968390  673306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:25:32.971852  673306 config.go:182] Loaded profile config "newest-cni-935881": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 19:25:32.972464  673306 driver.go:394] Setting default libvirt URI to qemu:///system
	I0319 19:25:32.999037  673306 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
	I0319 19:25:32.999209  673306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 19:25:33.063681  673306 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-19 19:25:33.054306596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 19:25:33.063798  673306 docker.go:318] overlay module found
	I0319 19:25:33.067071  673306 out.go:177] * Using the docker driver based on existing profile
	I0319 19:25:33.069931  673306 start.go:297] selected driver: docker
	I0319 19:25:33.069952  673306 start.go:901] validating driver "docker" against &{Name:newest-cni-935881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-935881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:25:33.070070  673306 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:25:33.070806  673306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 19:25:33.139596  673306 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-19 19:25:33.130721121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 19:25:33.139957  673306 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0319 19:25:33.139996  673306 cni.go:84] Creating CNI manager for ""
	I0319 19:25:33.140071  673306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0319 19:25:33.140109  673306 start.go:340] cluster config:
	{Name:newest-cni-935881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-935881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:25:33.143367  673306 out.go:177] * Starting "newest-cni-935881" primary control-plane node in "newest-cni-935881" cluster
	I0319 19:25:33.146250  673306 cache.go:121] Beginning downloading kic base image for docker with crio
	I0319 19:25:33.149179  673306 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0319 19:25:33.152016  673306 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0319 19:25:33.152073  673306 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0319 19:25:33.152082  673306 cache.go:56] Caching tarball of preloaded images
	I0319 19:25:33.152116  673306 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0319 19:25:33.152165  673306 preload.go:172] Found /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0319 19:25:33.152177  673306 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0319 19:25:33.152297  673306 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/config.json ...
	I0319 19:25:33.172703  673306 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0319 19:25:33.172727  673306 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0319 19:25:33.172745  673306 cache.go:230] Successfully downloaded all kic artifacts
	I0319 19:25:33.172767  673306 start.go:360] acquireMachinesLock for newest-cni-935881: {Name:mka58cb402688bf8804fa925c41c248b07873461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 19:25:33.172830  673306 start.go:364] duration metric: took 40.255µs to acquireMachinesLock for "newest-cni-935881"
	I0319 19:25:33.172862  673306 start.go:96] Skipping create...Using existing machine configuration
	I0319 19:25:33.172871  673306 fix.go:54] fixHost starting: 
	I0319 19:25:33.173141  673306 cli_runner.go:164] Run: docker container inspect newest-cni-935881 --format={{.State.Status}}
	I0319 19:25:33.190374  673306 fix.go:112] recreateIfNeeded on newest-cni-935881: state=Stopped err=<nil>
	W0319 19:25:33.190406  673306 fix.go:138] unexpected machine state, will restart: <nil>
	I0319 19:25:33.193592  673306 out.go:177] * Restarting existing docker container for "newest-cni-935881" ...
	I0319 19:25:29.092386  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:31.592537  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:33.594864  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:33.196480  673306 cli_runner.go:164] Run: docker start newest-cni-935881
	I0319 19:25:33.474430  673306 cli_runner.go:164] Run: docker container inspect newest-cni-935881 --format={{.State.Status}}
	I0319 19:25:33.503652  673306 kic.go:430] container "newest-cni-935881" state is running.
	I0319 19:25:33.504043  673306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-935881
	I0319 19:25:33.533646  673306 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/config.json ...
	I0319 19:25:33.533968  673306 machine.go:93] provisionDockerMachine start ...
	I0319 19:25:33.534051  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:33.553351  673306 main.go:141] libmachine: Using SSH client type: native
	I0319 19:25:33.553671  673306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I0319 19:25:33.553688  673306 main.go:141] libmachine: About to run SSH command:
	hostname
	I0319 19:25:33.554377  673306 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56598->127.0.0.1:33488: read: connection reset by peer
	I0319 19:25:36.681202  673306 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-935881
	
	I0319 19:25:36.681228  673306 ubuntu.go:169] provisioning hostname "newest-cni-935881"
	I0319 19:25:36.681334  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:36.700642  673306 main.go:141] libmachine: Using SSH client type: native
	I0319 19:25:36.700955  673306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I0319 19:25:36.700970  673306 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-935881 && echo "newest-cni-935881" | sudo tee /etc/hostname
	I0319 19:25:36.846654  673306 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-935881
	
	I0319 19:25:36.846730  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:36.868847  673306 main.go:141] libmachine: Using SSH client type: native
	I0319 19:25:36.869157  673306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I0319 19:25:36.869182  673306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-935881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-935881/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-935881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0319 19:25:36.993812  673306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0319 19:25:36.993838  673306 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20544-448023/.minikube CaCertPath:/home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20544-448023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20544-448023/.minikube}
	I0319 19:25:36.993878  673306 ubuntu.go:177] setting up certificates
	I0319 19:25:36.993888  673306 provision.go:84] configureAuth start
	I0319 19:25:36.993950  673306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-935881
	I0319 19:25:37.014185  673306 provision.go:143] copyHostCerts
	I0319 19:25:37.014267  673306 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-448023/.minikube/ca.pem, removing ...
	I0319 19:25:37.014285  673306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-448023/.minikube/ca.pem
	I0319 19:25:37.014370  673306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20544-448023/.minikube/ca.pem (1082 bytes)
	I0319 19:25:37.014475  673306 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-448023/.minikube/cert.pem, removing ...
	I0319 19:25:37.014480  673306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-448023/.minikube/cert.pem
	I0319 19:25:37.014515  673306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20544-448023/.minikube/cert.pem (1123 bytes)
	I0319 19:25:37.014571  673306 exec_runner.go:144] found /home/jenkins/minikube-integration/20544-448023/.minikube/key.pem, removing ...
	I0319 19:25:37.014577  673306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20544-448023/.minikube/key.pem
	I0319 19:25:37.014603  673306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20544-448023/.minikube/key.pem (1679 bytes)
	I0319 19:25:37.014652  673306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20544-448023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca-key.pem org=jenkins.newest-cni-935881 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-935881]
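Note: the server cert above carries a SAN for every name the machine can be reached by: the loopback address, the container IP 192.168.76.2, and the host names. A compressed sketch with Go's crypto/x509 — self-signed for brevity, whereas minikube actually signs server.pem with the ca.pem/ca-key.pem pair shown in the log; error handling elided:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-935881"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration below
            // the SAN list mirrors the san=[...] log line above
            DNSNames:    []string{"localhost", "minikube", "newest-cni-935881"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }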
	I0319 19:25:37.542457  673306 provision.go:177] copyRemoteCerts
	I0319 19:25:37.542565  673306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0319 19:25:37.542620  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:37.560956  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:37.651350  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0319 19:25:37.678086  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0319 19:25:37.703588  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0319 19:25:37.730091  673306 provision.go:87] duration metric: took 736.189606ms to configureAuth
	I0319 19:25:37.730120  673306 ubuntu.go:193] setting minikube options for container-runtime
	I0319 19:25:37.730339  673306 config.go:182] Loaded profile config "newest-cni-935881": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 19:25:37.730518  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:37.748075  673306 main.go:141] libmachine: Using SSH client type: native
	I0319 19:25:37.748405  673306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I0319 19:25:37.748426  673306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0319 19:25:36.091173  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:38.092489  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:38.072805  673306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0319 19:25:38.072837  673306 machine.go:96] duration metric: took 4.538853558s to provisionDockerMachine
	I0319 19:25:38.072851  673306 start.go:293] postStartSetup for "newest-cni-935881" (driver="docker")
	I0319 19:25:38.072863  673306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0319 19:25:38.072990  673306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0319 19:25:38.073037  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:38.095326  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:38.187440  673306 ssh_runner.go:195] Run: cat /etc/os-release
	I0319 19:25:38.191200  673306 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0319 19:25:38.191231  673306 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0319 19:25:38.191242  673306 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0319 19:25:38.191250  673306 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0319 19:25:38.191271  673306 filesync.go:126] Scanning /home/jenkins/minikube-integration/20544-448023/.minikube/addons for local assets ...
	I0319 19:25:38.191335  673306 filesync.go:126] Scanning /home/jenkins/minikube-integration/20544-448023/.minikube/files for local assets ...
	I0319 19:25:38.191423  673306 filesync.go:149] local asset: /home/jenkins/minikube-integration/20544-448023/.minikube/files/etc/ssl/certs/4534112.pem -> 4534112.pem in /etc/ssl/certs
	I0319 19:25:38.191558  673306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0319 19:25:38.204283  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/files/etc/ssl/certs/4534112.pem --> /etc/ssl/certs/4534112.pem (1708 bytes)
	I0319 19:25:38.232330  673306 start.go:296] duration metric: took 159.463024ms for postStartSetup
	I0319 19:25:38.232412  673306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 19:25:38.232477  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:38.250706  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:38.346686  673306 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 19:25:38.351255  673306 fix.go:56] duration metric: took 5.178376493s for fixHost
	I0319 19:25:38.351284  673306 start.go:83] releasing machines lock for "newest-cni-935881", held for 5.178440724s
	I0319 19:25:38.351353  673306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-935881
	I0319 19:25:38.368233  673306 ssh_runner.go:195] Run: cat /version.json
	I0319 19:25:38.368283  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:38.368323  673306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0319 19:25:38.368378  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:38.386762  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:38.401468  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:38.473137  673306 ssh_runner.go:195] Run: systemctl --version
	I0319 19:25:38.610214  673306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0319 19:25:38.756923  673306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0319 19:25:38.761525  673306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:25:38.770586  673306 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0319 19:25:38.770676  673306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0319 19:25:38.779817  673306 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
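Note: with CRI-O plus kindnet, any preinstalled loopback or bridge CNI configs must not win the /etc/cni/net.d lookup, so minikube parks them under a .mk_disabled suffix rather than deleting them. A sketch of that rename pass (function name is mine):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // disableCNIConfs renames matching configs so CRI-O ignores them and
    // minikube's own CNI (kindnet here) owns pod networking.
    func disableCNIConfs(dir, pattern string) error {
        matches, err := filepath.Glob(filepath.Join(dir, pattern))
        if err != nil {
            return err
        }
        for _, m := range matches {
            if filepath.Ext(m) == ".mk_disabled" {
                continue // already disabled on a previous start (-not -name *.mk_disabled)
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        fmt.Println(disableCNIConfs("/etc/cni/net.d", "*loopback.conf*"))
    }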
	I0319 19:25:38.779841  673306 start.go:495] detecting cgroup driver to use...
	I0319 19:25:38.779872  673306 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0319 19:25:38.779920  673306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0319 19:25:38.792658  673306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0319 19:25:38.805179  673306 docker.go:217] disabling cri-docker service (if available) ...
	I0319 19:25:38.805256  673306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0319 19:25:38.819217  673306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0319 19:25:38.832532  673306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0319 19:25:38.931521  673306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0319 19:25:39.047557  673306 docker.go:233] disabling docker service ...
	I0319 19:25:39.047633  673306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0319 19:25:39.064681  673306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0319 19:25:39.076657  673306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0319 19:25:39.164298  673306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0319 19:25:39.247425  673306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0319 19:25:39.259983  673306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0319 19:25:39.276634  673306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0319 19:25:39.276713  673306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:25:39.287028  673306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0319 19:25:39.287169  673306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:25:39.297599  673306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:25:39.307547  673306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:25:39.317362  673306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0319 19:25:39.326665  673306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:25:39.338181  673306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0319 19:25:39.347825  673306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
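Note: the sed chain above pins the pause image to registry.k8s.io/pause:3.10, switches CRI-O to the cgroupfs cgroup manager to match the host, moves conmon into the pod cgroup, and sets net.ipv4.ip_unprivileged_port_start=0 so pods can bind ports below 1024. A sketch of the first two whole-line rewrites in Go, regex-based like the sed calls (error handling elided):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the same substitutions as the first two sed calls.
    func rewriteCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        fmt.Println(rewriteCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\""))
    }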
	I0319 19:25:39.358459  673306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0319 19:25:39.367542  673306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0319 19:25:39.377507  673306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:25:39.460890  673306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0319 19:25:39.583851  673306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0319 19:25:39.583938  673306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0319 19:25:39.588420  673306 start.go:563] Will wait 60s for crictl version
	I0319 19:25:39.588507  673306 ssh_runner.go:195] Run: which crictl
	I0319 19:25:39.596938  673306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0319 19:25:39.640248  673306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0319 19:25:39.640334  673306 ssh_runner.go:195] Run: crio --version
	I0319 19:25:39.684195  673306 ssh_runner.go:195] Run: crio --version
	I0319 19:25:39.729843  673306 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0319 19:25:39.732805  673306 cli_runner.go:164] Run: docker network inspect newest-cni-935881 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 19:25:39.749739  673306 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0319 19:25:39.753477  673306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:25:39.768151  673306 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0319 19:25:39.771105  673306 kubeadm.go:883] updating cluster {Name:newest-cni-935881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-935881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0319 19:25:39.771272  673306 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0319 19:25:39.771377  673306 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:25:39.819704  673306 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:25:39.819730  673306 crio.go:433] Images already preloaded, skipping extraction
	I0319 19:25:39.819789  673306 ssh_runner.go:195] Run: sudo crictl images --output json
	I0319 19:25:39.856853  673306 crio.go:514] all images are preloaded for cri-o runtime.
	I0319 19:25:39.856879  673306 cache_images.go:84] Images are preloaded, skipping loading
	I0319 19:25:39.856887  673306 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0319 19:25:39.856987  673306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-935881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-935881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
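Note: the generated unit above intentionally contains a bare "ExecStart=" line — that is systemd's idiom for clearing the packaged unit's command before the override sets the full kubelet invocation with --hostname-override and --node-ip. A sketch of installing such a drop-in, using the paths from the scp lines below (the flag list is truncated here; run as root):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Write the drop-in, then reload units and restart kubelet, as the
        // daemon-reload/start kubelet commands below do. Errors elided.
        drop := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet" +
            " --config=/var/lib/kubelet/config.yaml --node-ip=192.168.76.2\n"
        os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755)
        os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(drop), 0o644)
        exec.Command("systemctl", "daemon-reload").Run()
        exec.Command("systemctl", "restart", "kubelet").Run()
    }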
	I0319 19:25:39.857069  673306 ssh_runner.go:195] Run: crio config
	I0319 19:25:39.914323  673306 cni.go:84] Creating CNI manager for ""
	I0319 19:25:39.914348  673306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0319 19:25:39.914363  673306 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0319 19:25:39.914385  673306 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-935881 NodeName:newest-cni-935881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0319 19:25:39.914514  673306 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-935881"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
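Note: the file above is a single kubeadm.yaml holding four YAML documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the 2289-byte scp below ships it as kubeadm.yaml.new. A quick sketch of decoding the stream per document with gopkg.in/yaml.v3 (an illustrative check, not minikube code):

    package main

    import (
        "fmt"
        "strings"

        "gopkg.in/yaml.v3"
    )

    // kinds decodes each "---"-separated document and reports its kind.
    func kinds(cfg string) []string {
        var out []string
        dec := yaml.NewDecoder(strings.NewReader(cfg))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF terminates the stream
            }
            out = append(out, fmt.Sprint(doc["kind"]))
        }
        return out
    }

    func main() {
        fmt.Println(kinds("kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"))
    }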
	
	I0319 19:25:39.914589  673306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0319 19:25:39.923753  673306 binaries.go:44] Found k8s binaries, skipping transfer
	I0319 19:25:39.923840  673306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0319 19:25:39.932684  673306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0319 19:25:39.950883  673306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0319 19:25:39.969236  673306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0319 19:25:39.987521  673306 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0319 19:25:39.991007  673306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0319 19:25:40.001343  673306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:25:40.098468  673306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:25:40.114214  673306 certs.go:68] Setting up /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881 for IP: 192.168.76.2
	I0319 19:25:40.114304  673306 certs.go:194] generating shared ca certs ...
	I0319 19:25:40.114336  673306 certs.go:226] acquiring lock for ca certs: {Name:mkd8a6899d1e79d8873b3a9b4a64f23be9e68740 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:25:40.114533  673306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20544-448023/.minikube/ca.key
	I0319 19:25:40.114620  673306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.key
	I0319 19:25:40.114648  673306 certs.go:256] generating profile certs ...
	I0319 19:25:40.114800  673306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/client.key
	I0319 19:25:40.114919  673306 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/apiserver.key.74c2609b
	I0319 19:25:40.115016  673306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/proxy-client.key
	I0319 19:25:40.115196  673306 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/453411.pem (1338 bytes)
	W0319 19:25:40.115269  673306 certs.go:480] ignoring /home/jenkins/minikube-integration/20544-448023/.minikube/certs/453411_empty.pem, impossibly tiny 0 bytes
	I0319 19:25:40.115298  673306 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca-key.pem (1675 bytes)
	I0319 19:25:40.115368  673306 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/ca.pem (1082 bytes)
	I0319 19:25:40.115423  673306 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/cert.pem (1123 bytes)
	I0319 19:25:40.115492  673306 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/certs/key.pem (1679 bytes)
	I0319 19:25:40.115583  673306 certs.go:484] found cert: /home/jenkins/minikube-integration/20544-448023/.minikube/files/etc/ssl/certs/4534112.pem (1708 bytes)
	I0319 19:25:40.116505  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0319 19:25:40.152225  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0319 19:25:40.184054  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0319 19:25:40.238006  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0319 19:25:40.286766  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0319 19:25:40.323469  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0319 19:25:40.350455  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0319 19:25:40.377265  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/newest-cni-935881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0319 19:25:40.403961  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/files/etc/ssl/certs/4534112.pem --> /usr/share/ca-certificates/4534112.pem (1708 bytes)
	I0319 19:25:40.428687  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0319 19:25:40.454540  673306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20544-448023/.minikube/certs/453411.pem --> /usr/share/ca-certificates/453411.pem (1338 bytes)
	I0319 19:25:40.479713  673306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0319 19:25:40.498251  673306 ssh_runner.go:195] Run: openssl version
	I0319 19:25:40.504008  673306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4534112.pem && ln -fs /usr/share/ca-certificates/4534112.pem /etc/ssl/certs/4534112.pem"
	I0319 19:25:40.513962  673306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4534112.pem
	I0319 19:25:40.518378  673306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 19 18:33 /usr/share/ca-certificates/4534112.pem
	I0319 19:25:40.518503  673306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4534112.pem
	I0319 19:25:40.525736  673306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4534112.pem /etc/ssl/certs/3ec20f2e.0"
	I0319 19:25:40.535392  673306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0319 19:25:40.545621  673306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:25:40.549560  673306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 19 18:25 /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:25:40.549657  673306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0319 19:25:40.557597  673306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0319 19:25:40.567146  673306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/453411.pem && ln -fs /usr/share/ca-certificates/453411.pem /etc/ssl/certs/453411.pem"
	I0319 19:25:40.577870  673306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/453411.pem
	I0319 19:25:40.581616  673306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 19 18:33 /usr/share/ca-certificates/453411.pem
	I0319 19:25:40.581692  673306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/453411.pem
	I0319 19:25:40.589373  673306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/453411.pem /etc/ssl/certs/51391683.0"
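Note: the three "ln -fs" commands above populate an OpenSSL-style hashed trust directory: each CA PEM gets a <subject-hash>.0 symlink in /etc/ssl/certs so TLS clients can find it by hash (e.g. b5213941.0 for minikubeCA.pem above). A sketch that shells out to openssl for the hash, as the log does (run as root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash creates /etc/ssl/certs/<subject-hash>.0 -> pemPath.
    func linkByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem"))
    }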
	I0319 19:25:40.600254  673306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0319 19:25:40.604261  673306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0319 19:25:40.612327  673306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0319 19:25:40.619834  673306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0319 19:25:40.627212  673306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0319 19:25:40.634658  673306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0319 19:25:40.642058  673306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
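Note: each "-checkend 86400" run above asks whether a control-plane cert (apiserver-etcd-client, apiserver-kubelet-client, the etcd certs, front-proxy-client) expires within 24 hours; a non-zero exit would trigger regeneration before restart. The same check in pure Go — a sketch, since minikube uses the openssl CLI as logged:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon reports whether the cert at path expires within the window.
    func expiresSoon(path string, within time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }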
	I0319 19:25:40.649182  673306 kubeadm.go:392] StartCluster: {Name:newest-cni-935881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-935881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 19:25:40.649326  673306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0319 19:25:40.649385  673306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0319 19:25:40.686976  673306 cri.go:89] found id: ""
	I0319 19:25:40.687048  673306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0319 19:25:40.696219  673306 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0319 19:25:40.696248  673306 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0319 19:25:40.696300  673306 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0319 19:25:40.705973  673306 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0319 19:25:40.706758  673306 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-935881" does not appear in /home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 19:25:40.707283  673306 kubeconfig.go:62] /home/jenkins/minikube-integration/20544-448023/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-935881" cluster setting kubeconfig missing "newest-cni-935881" context setting]
	I0319 19:25:40.708410  673306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/kubeconfig: {Name:mk54867cb0e9cc74fa0dd9ec986d9fb8d5ff5dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
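Note: the restarted profile is missing from the shared kubeconfig, so minikube rewrites the file under a write lock, re-adding the cluster and context entries it reported missing above. A sketch of that repair with client-go's clientcmd package (assumed API usage; certificate fields omitted):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repair re-adds the profile's cluster and context entries to kubeconfig.
    func repair(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cfg.Clusters[name] = &api.Cluster{Server: server}                 // missing "cluster setting"
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}  // missing "context setting"
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        _ = repair("/home/jenkins/minikube-integration/20544-448023/kubeconfig",
            "newest-cni-935881", "https://192.168.76.2:8443")
    }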
	I0319 19:25:40.714994  673306 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0319 19:25:40.729501  673306 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0319 19:25:40.729550  673306 kubeadm.go:597] duration metric: took 33.295212ms to restartPrimaryControlPlane
	I0319 19:25:40.729560  673306 kubeadm.go:394] duration metric: took 80.388504ms to StartCluster
	I0319 19:25:40.729575  673306 settings.go:142] acquiring lock: {Name:mk7bcf22d5090743d25ff681e3c908a88736d42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:25:40.729647  673306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 19:25:40.730593  673306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/kubeconfig: {Name:mk54867cb0e9cc74fa0dd9ec986d9fb8d5ff5dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 19:25:40.730778  673306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0319 19:25:40.731065  673306 config.go:182] Loaded profile config "newest-cni-935881": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 19:25:40.731114  673306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0319 19:25:40.731178  673306 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-935881"
	I0319 19:25:40.731195  673306 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-935881"
	W0319 19:25:40.731201  673306 addons.go:247] addon storage-provisioner should already be in state true
	I0319 19:25:40.731226  673306 host.go:66] Checking if "newest-cni-935881" exists ...
	I0319 19:25:40.731966  673306 cli_runner.go:164] Run: docker container inspect newest-cni-935881 --format={{.State.Status}}
	I0319 19:25:40.735659  673306 out.go:177] * Verifying Kubernetes components...
	I0319 19:25:40.739254  673306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0319 19:25:40.740823  673306 addons.go:69] Setting dashboard=true in profile "newest-cni-935881"
	I0319 19:25:40.740840  673306 addons.go:69] Setting metrics-server=true in profile "newest-cni-935881"
	I0319 19:25:40.740846  673306 addons.go:238] Setting addon dashboard=true in "newest-cni-935881"
	W0319 19:25:40.740854  673306 addons.go:247] addon dashboard should already be in state true
	I0319 19:25:40.740868  673306 addons.go:238] Setting addon metrics-server=true in "newest-cni-935881"
	W0319 19:25:40.740875  673306 addons.go:247] addon metrics-server should already be in state true
	I0319 19:25:40.740894  673306 host.go:66] Checking if "newest-cni-935881" exists ...
	I0319 19:25:40.740897  673306 host.go:66] Checking if "newest-cni-935881" exists ...
	I0319 19:25:40.741350  673306 cli_runner.go:164] Run: docker container inspect newest-cni-935881 --format={{.State.Status}}
	I0319 19:25:40.741355  673306 cli_runner.go:164] Run: docker container inspect newest-cni-935881 --format={{.State.Status}}
	I0319 19:25:40.744205  673306 addons.go:69] Setting default-storageclass=true in profile "newest-cni-935881"
	I0319 19:25:40.744235  673306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-935881"
	I0319 19:25:40.744572  673306 cli_runner.go:164] Run: docker container inspect newest-cni-935881 --format={{.State.Status}}
	I0319 19:25:40.793883  673306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0319 19:25:40.799865  673306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:25:40.799889  673306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0319 19:25:40.799996  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:40.819067  673306 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0319 19:25:40.825380  673306 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0319 19:25:40.825412  673306 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0319 19:25:40.825485  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:40.826410  673306 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0319 19:25:40.833805  673306 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0319 19:25:40.838333  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0319 19:25:40.838390  673306 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0319 19:25:40.838471  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:40.849957  673306 addons.go:238] Setting addon default-storageclass=true in "newest-cni-935881"
	W0319 19:25:40.849979  673306 addons.go:247] addon default-storageclass should already be in state true
	I0319 19:25:40.850004  673306 host.go:66] Checking if "newest-cni-935881" exists ...
	I0319 19:25:40.850438  673306 cli_runner.go:164] Run: docker container inspect newest-cni-935881 --format={{.State.Status}}
	I0319 19:25:40.877964  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:40.891763  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:40.915539  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:40.926862  673306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0319 19:25:40.926897  673306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0319 19:25:40.926990  673306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-935881
	I0319 19:25:40.960896  673306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/newest-cni-935881/id_rsa Username:docker}
	I0319 19:25:41.163896  673306 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0319 19:25:41.163975  673306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0319 19:25:41.165339  673306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:25:41.171267  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0319 19:25:41.171338  673306 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0319 19:25:41.173710  673306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0319 19:25:41.228495  673306 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0319 19:25:41.228522  673306 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0319 19:25:41.239768  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0319 19:25:41.239795  673306 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0319 19:25:41.245420  673306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0319 19:25:41.305725  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0319 19:25:41.305752  673306 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0319 19:25:41.318290  673306 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 19:25:41.318316  673306 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0319 19:25:41.351611  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0319 19:25:41.351637  673306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0319 19:25:41.409404  673306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 19:25:41.446647  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0319 19:25:41.446673  673306 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0319 19:25:41.520875  673306 api_server.go:52] waiting for apiserver process to appear ...
	I0319 19:25:41.520954  673306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0319 19:25:41.521057  673306 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0319 19:25:41.521095  673306 retry.go:31] will retry after 148.314195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0319 19:25:41.521136  673306 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0319 19:25:41.521149  673306 retry.go:31] will retry after 310.470629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
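Note: the two failures above are expected: the control plane has only just been restarted, so the first kubectl applies hit a dead localhost:8443 and are re-queued with short randomized delays (148ms and 310ms here) before being re-run with --force below, rather than failing the whole start. A sketch of that retry shape, modeled on the retry.go lines in the log (my own illustration):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered delay between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base + time.Duration(rand.Int63n(int64(base))) // randomized backoff
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        fmt.Println(retry(3, 150*time.Millisecond, func() error {
            return errors.New("dial tcp [::1]:8443: connect: connection refused")
        }))
    }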
	I0319 19:25:41.524700  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0319 19:25:41.524738  673306 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0319 19:25:41.590926  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0319 19:25:41.590956  673306 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0319 19:25:41.643540  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0319 19:25:41.643565  673306 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0319 19:25:41.650059  673306 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0319 19:25:41.650160  673306 retry.go:31] will retry after 162.917241ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0319 19:25:41.650246  673306 api_server.go:72] duration metric: took 919.437331ms to wait for apiserver process to appear ...
	I0319 19:25:41.650273  673306 api_server.go:88] waiting for apiserver healthz status ...
	I0319 19:25:41.650320  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:41.650687  673306 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0319 19:25:41.670042  673306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0319 19:25:41.679399  673306 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0319 19:25:41.679477  673306 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0319 19:25:41.710582  673306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0319 19:25:41.814033  673306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0319 19:25:41.832360  673306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0319 19:25:42.150886  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:40.093637  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:42.591271  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:46.531758  673306 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 19:25:46.531795  673306 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 19:25:46.531809  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:47.029563  673306 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 19:25:47.029600  673306 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 19:25:47.029618  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:47.050797  673306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.380660686s)
	I0319 19:25:47.062562  673306 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0319 19:25:47.062595  673306 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0319 19:25:47.152890  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:47.294556  673306 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 19:25:47.294592  673306 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 19:25:47.650950  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:47.683539  673306 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 19:25:47.683579  673306 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 19:25:48.151285  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:48.164502  673306 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 19:25:48.164548  673306 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 19:25:48.417813  673306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.707029116s)
	I0319 19:25:48.417868  673306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.603673371s)
	I0319 19:25:48.417880  673306 addons.go:479] Verifying addon metrics-server=true in "newest-cni-935881"
	I0319 19:25:48.418040  673306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.585596234s)
	I0319 19:25:48.421061  673306 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-935881 addons enable metrics-server
	
	I0319 19:25:48.424079  673306 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0319 19:25:45.092991  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:47.591266  666229 pod_ready.go:103] pod "metrics-server-f79f97bbb-d889p" in "kube-system" namespace has status "Ready":"False"
	I0319 19:25:48.427223  673306 addons.go:514] duration metric: took 7.696102658s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0319 19:25:48.651213  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:48.662004  673306 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0319 19:25:48.662041  673306 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0319 19:25:49.150618  673306 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0319 19:25:49.165471  673306 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0319 19:25:49.167051  673306 api_server.go:141] control plane version: v1.32.2
	I0319 19:25:49.167131  673306 api_server.go:131] duration metric: took 7.516823312s to wait for apiserver health ...
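
Editor's note: the progression above is the normal apiserver startup sequence: connection refused while the process starts, 403 while RBAC has not yet granted anonymous access to /healthz, 500 while poststarthooks (rbac/bootstrap-roles and friends) are still pending, then 200 "ok". A minimal sketch of such a polling loop follows, assuming a self-signed serving certificate (hence InsecureSkipVerify); it mirrors what the api_server.go lines appear to do, and is not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 or the timeout expires.
	// 403 (RBAC not yet bootstrapped) and 500 (poststarthooks pending) are
	// treated as "not ready yet", matching the log above.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cert assumed
			},
			Timeout: 2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
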
	I0319 19:25:49.167153  673306 system_pods.go:43] waiting for kube-system pods to appear ...
	I0319 19:25:49.175912  673306 system_pods.go:59] 9 kube-system pods found
	I0319 19:25:49.176026  673306 system_pods.go:61] "coredns-668d6bf9bc-zjltz" [f33bd50a-e83e-4574-8bcf-8ca3c716add8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0319 19:25:49.176085  673306 system_pods.go:61] "etcd-newest-cni-935881" [485302b9-7e6b-4075-9791-0d1d0eb59d33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0319 19:25:49.176126  673306 system_pods.go:61] "kindnet-m8hgm" [22af2f85-1811-454c-ae2a-6a04d7339cb0] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0319 19:25:49.176173  673306 system_pods.go:61] "kube-apiserver-newest-cni-935881" [e91a32b0-5acd-446c-bb84-06109d5ea562] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0319 19:25:49.176219  673306 system_pods.go:61] "kube-controller-manager-newest-cni-935881" [1836a916-6e83-4f69-8eb9-5bf30ef0c59b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0319 19:25:49.176263  673306 system_pods.go:61] "kube-proxy-584zk" [a4429ea9-baaf-41c2-b5be-5dadc0ade822] Running
	I0319 19:25:49.176296  673306 system_pods.go:61] "kube-scheduler-newest-cni-935881" [f035325f-80b6-4fc9-9cb6-b318ff326c45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0319 19:25:49.176318  673306 system_pods.go:61] "metrics-server-f79f97bbb-9s77l" [7acb3cdd-a58a-4b6d-bcfe-4a9286a16e0a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0319 19:25:49.176356  673306 system_pods.go:61] "storage-provisioner" [3acd5575-428f-4082-8755-dbbd1f1b2d8c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0319 19:25:49.176390  673306 system_pods.go:74] duration metric: took 9.216103ms to wait for pod list to return data ...
	I0319 19:25:49.176435  673306 default_sa.go:34] waiting for default service account to be created ...
	I0319 19:25:49.179644  673306 default_sa.go:45] found service account: "default"
	I0319 19:25:49.179740  673306 default_sa.go:55] duration metric: took 3.281873ms for default service account to be created ...
	I0319 19:25:49.179780  673306 kubeadm.go:582] duration metric: took 8.448966006s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0319 19:25:49.179861  673306 node_conditions.go:102] verifying NodePressure condition ...
	I0319 19:25:49.183296  673306 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0319 19:25:49.183333  673306 node_conditions.go:123] node cpu capacity is 2
	I0319 19:25:49.183346  673306 node_conditions.go:105] duration metric: took 3.43833ms to run NodePressure ...
	I0319 19:25:49.183361  673306 start.go:241] waiting for startup goroutines ...
	I0319 19:25:49.183369  673306 start.go:246] waiting for cluster config update ...
	I0319 19:25:49.183381  673306 start.go:255] writing updated cluster config ...
	I0319 19:25:49.183667  673306 ssh_runner.go:195] Run: rm -f paused
	I0319 19:25:49.289282  673306 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0319 19:25:49.296421  673306 out.go:177] * Done! kubectl is now configured to use "newest-cni-935881" cluster and "default" namespace by default
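
Editor's note: the Pending/Unschedulable pods listed above (coredns, metrics-server, storage-provisioner) are all blocked by the node.kubernetes.io/not-ready taint, which stays on the node until a CNI is in place; the describe-nodes output below shows the same taint. A minimal client-go sketch for inspecting node taints, assuming the default ~/.kube/config location:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; an in-cluster config would differ.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		// Print every taint; a not-ready node shows
		// node.kubernetes.io/not-ready=:NoSchedule here.
		for _, n := range nodes.Items {
			for _, t := range n.Spec.Taints {
				fmt.Printf("%s  %s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
			}
		}
	}
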
	
	
	==> CRI-O <==
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.611358956Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.32.2" id=8df9eab5-d2d4-4824-86dc-9f0daf3a2415 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.611587864Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062,RepoTags:[registry.k8s.io/kube-proxy:v1.32.2],RepoDigests:[registry.k8s.io/kube-proxy@sha256:6b93583f4856ea0923c6fffd91c802a2362511378390acc6e539a419210ee23b registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d],Size_:98313623,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8df9eab5-d2d4-4824-86dc-9f0daf3a2415 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.612525831Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.32.2" id=4fd8d53b-fe88-4744-8e26-b3119c08485c name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.612712925Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062,RepoTags:[registry.k8s.io/kube-proxy:v1.32.2],RepoDigests:[registry.k8s.io/kube-proxy@sha256:6b93583f4856ea0923c6fffd91c802a2362511378390acc6e539a419210ee23b registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d],Size_:98313623,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4fd8d53b-fe88-4744-8e26-b3119c08485c name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.613414862Z" level=info msg="Creating container: kube-system/kube-proxy-584zk/kube-proxy" id=41915364-5740-4272-aa66-88ec4f5e6744 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.613505496Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.631410529Z" level=info msg="Ran pod sandbox 76bdd733361738fff8a172bc04d0fd79fa200bcc559a83ef8d55c758eb9e8fde with infra container: kube-system/kindnet-m8hgm/POD" id=a4ce5a3c-a8c2-49a9-93b6-759a152494eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.632750976Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=bd962da2-fadb-453f-a6c4-e4eeb8c0f702 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.633285576Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=bd962da2-fadb-453f-a6c4-e4eeb8c0f702 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.634320790Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=daf28243-9c83-459a-a288-2fc75d0691ba name=/runtime.v1.ImageService/PullImage
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.640657771Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Mar 19 19:25:47 newest-cni-935881 crio[535]: time="2025-03-19 19:25:47.950224506Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Mar 19 19:25:48 newest-cni-935881 crio[535]: time="2025-03-19 19:25:48.013276743Z" level=info msg="Created container 4030d6a84ab179cef677663b6f61dae1bf9476cf0898ba9a295aef6eff60efd7: kube-system/kube-proxy-584zk/kube-proxy" id=41915364-5740-4272-aa66-88ec4f5e6744 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 19 19:25:48 newest-cni-935881 crio[535]: time="2025-03-19 19:25:48.015277222Z" level=info msg="Starting container: 4030d6a84ab179cef677663b6f61dae1bf9476cf0898ba9a295aef6eff60efd7" id=3aea95d1-dbb7-49f1-aecc-731629b06d1c name=/runtime.v1.RuntimeService/StartContainer
	Mar 19 19:25:48 newest-cni-935881 crio[535]: time="2025-03-19 19:25:48.059433430Z" level=info msg="Started container" PID=1094 containerID=4030d6a84ab179cef677663b6f61dae1bf9476cf0898ba9a295aef6eff60efd7 description=kube-system/kube-proxy-584zk/kube-proxy id=3aea95d1-dbb7-49f1-aecc-731629b06d1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=afb0d5abf6a9872f10cd6399337378d6afd2cbd2e7193a5faa2fc7b4f3f66e43
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.209409885Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955" id=daf28243-9c83-459a-a288-2fc75d0691ba name=/runtime.v1.ImageService/PullImage
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.225350598Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=ed4e614e-2144-46ac-9154-34bf13b54344 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.227800483Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f,RepoTags:[docker.io/kindest/kindnetd:v20250214-acbabc1a],RepoDigests:[docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955 docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495],Size_:99018290,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ed4e614e-2144-46ac-9154-34bf13b54344 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.231999236Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=9532e7bb-f098-45b0-b29a-708719c07fd8 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.234935147Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f,RepoTags:[docker.io/kindest/kindnetd:v20250214-acbabc1a],RepoDigests:[docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955 docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495],Size_:99018290,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9532e7bb-f098-45b0-b29a-708719c07fd8 name=/runtime.v1.ImageService/ImageStatus
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.243497577Z" level=info msg="Creating container: kube-system/kindnet-m8hgm/kindnet-cni" id=e8b7a54b-f246-4262-9431-81b206395efa name=/runtime.v1.RuntimeService/CreateContainer
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.243876902Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.429867719Z" level=info msg="Created container 573cad1eef517e91503f8b54831ea8f8938cd51e4e9905069e3500cadd0f31c9: kube-system/kindnet-m8hgm/kindnet-cni" id=e8b7a54b-f246-4262-9431-81b206395efa name=/runtime.v1.RuntimeService/CreateContainer
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.431388474Z" level=info msg="Starting container: 573cad1eef517e91503f8b54831ea8f8938cd51e4e9905069e3500cadd0f31c9" id=03317b14-8300-4855-b5be-8738347c5a47 name=/runtime.v1.RuntimeService/StartContainer
	Mar 19 19:25:50 newest-cni-935881 crio[535]: time="2025-03-19 19:25:50.454646334Z" level=info msg="Started container" PID=1307 containerID=573cad1eef517e91503f8b54831ea8f8938cd51e4e9905069e3500cadd0f31c9 description=kube-system/kindnet-m8hgm/kindnet-cni id=03317b14-8300-4855-b5be-8738347c5a47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76bdd733361738fff8a172bc04d0fd79fa200bcc559a83ef8d55c758eb9e8fde
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	573cad1eef517       docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955   1 second ago        Running             kindnet-cni               0                   76bdd73336173       kindnet-m8hgm
	4030d6a84ab17       e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062                                     3 seconds ago       Running             kube-proxy                1                   afb0d5abf6a98       kube-proxy-584zk
	748922407c6fa       6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32                                     10 seconds ago      Running             kube-apiserver            1                   0b3dbf900e430       kube-apiserver-newest-cni-935881
	d3ff5352dfd69       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                     10 seconds ago      Running             etcd                      1                   f78f74bf7e3f1       etcd-newest-cni-935881
	48df778850d4c       82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911                                     10 seconds ago      Running             kube-scheduler            1                   9f9d5bba2a7d0       kube-scheduler-newest-cni-935881
	d55f70ac6893f       3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d                                     10 seconds ago      Running             kube-controller-manager   1                   2ecea60f99087       kube-controller-manager-newest-cni-935881
	
	
	==> describe nodes <==
	Name:               newest-cni-935881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-935881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d76a625434f413a89ad1bb610dea10300ea9201f
	                    minikube.k8s.io/name=newest-cni-935881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_19T19_25_25_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Mar 2025 19:25:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-935881
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Mar 2025 19:25:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Mar 2025 19:25:47 +0000   Wed, 19 Mar 2025 19:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Mar 2025 19:25:47 +0000   Wed, 19 Mar 2025 19:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Mar 2025 19:25:47 +0000   Wed, 19 Mar 2025 19:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Mar 2025 19:25:47 +0000   Wed, 19 Mar 2025 19:25:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-935881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbead3f4fb79443dbed5b2ab0d3d8ab2
	  System UUID:                5e5c427b-3b13-4ba4-bd23-7c673c93d4cb
	  Boot ID:                    48f0ca68-a8da-47aa-b0f9-4e2bea015ace
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-935881                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         27s
	  kube-system                 kindnet-m8hgm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-935881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-935881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-584zk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-935881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node newest-cni-935881 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node newest-cni-935881 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node newest-cni-935881 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    27s                kubelet          Node newest-cni-935881 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 27s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  27s                kubelet          Node newest-cni-935881 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     27s                kubelet          Node newest-cni-935881 status is now: NodeHasSufficientPID
	  Normal   Starting                 27s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           24s                node-controller  Node newest-cni-935881 event: Registered Node newest-cni-935881 in Controller
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-935881 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-935881 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-935881 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           1s                 node-controller  Node newest-cni-935881 event: Registered Node newest-cni-935881 in Controller
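
Editor's note: the Ready=False condition and the not-ready taint both trace back to the same kubelet message above: no CNI configuration file in /etc/cni/net.d. kindnet writes one once its container is running (see the CRI-O and kindnet sections below). A minimal sketch that simply waits for that file to appear; the path is taken from the kubelet message, the poll interval is an arbitrary assumption:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const dir = "/etc/cni/net.d" // path taken from the kubelet message above
		for {
			entries, err := os.ReadDir(dir)
			if err == nil && len(entries) > 0 {
				fmt.Printf("found %d CNI config file(s) in %s\n", len(entries), dir)
				return
			}
			time.Sleep(time.Second) // arbitrary poll interval
		}
	}
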
	
	
	==> dmesg <==
	[Mar19 19:17] hrtimer: interrupt took 38532451 ns
	
	
	==> etcd [d3ff5352dfd6935accb6b81f5a0e5b57916eb4cb7f3240852b9d0e579778d651] <==
	{"level":"info","ts":"2025-03-19T19:25:42.005653Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-03-19T19:25:42.007375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-19T19:25:42.008067Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-19T19:25:42.018360Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-19T19:25:42.024436Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-19T19:25:42.036015Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-03-19T19:25:42.031492Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-03-19T19:25:42.036756Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-19T19:25:42.041836Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-03-19T19:25:43.587467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-03-19T19:25:43.587616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-03-19T19:25:43.587689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-03-19T19:25:43.587731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-03-19T19:25:43.590999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-03-19T19:25:43.591153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-03-19T19:25:43.591198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-03-19T19:25:43.594003Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-935881 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-19T19:25:43.596169Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-19T19:25:43.596621Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-19T19:25:43.597350Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-19T19:25:43.598364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-19T19:25:43.600296Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-03-19T19:25:43.600452Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-19T19:25:43.600499Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-19T19:25:43.600607Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:25:51 up  3:08,  0 users,  load average: 2.64, 1.91, 2.35
	Linux newest-cni-935881 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [573cad1eef517e91503f8b54831ea8f8938cd51e4e9905069e3500cadd0f31c9] <==
	I0319 19:25:50.532242       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0319 19:25:50.532522       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0319 19:25:50.532669       1 main.go:148] setting mtu 1500 for CNI 
	I0319 19:25:50.532690       1 main.go:178] kindnetd IP family: "ipv4"
	I0319 19:25:50.532701       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0319 19:25:50.930811       1 controller.go:361] Starting controller kube-network-policies
	I0319 19:25:50.930906       1 controller.go:365] Waiting for informer caches to sync
	I0319 19:25:50.930937       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0319 19:25:51.131194       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0319 19:25:51.131218       1 metrics.go:61] Registering metrics
	I0319 19:25:51.131255       1 controller.go:401] Syncing nftables rules
	
	
	==> kube-apiserver [748922407c6fa239c769703e155c135e770da6920dfd1662cdf05c33cdd59854] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0319 19:25:47.339560       1 cache.go:39] Caches are synced for autoregister controller
	I0319 19:25:47.411297       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0319 19:25:47.416282       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0319 19:25:47.517262       1 controller.go:615] quota admission added evaluator for: deployments.apps
	E0319 19:25:47.563037       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0319 19:25:47.611818       1 controller.go:615] quota admission added evaluator for: namespaces
	I0319 19:25:48.127620       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0319 19:25:48.147768       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0319 19:25:48.270384       1 handler_proxy.go:99] no RequestInfo found in the context
	E0319 19:25:48.270425       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0319 19:25:48.270903       1 handler_proxy.go:99] no RequestInfo found in the context
	E0319 19:25:48.270977       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0319 19:25:48.272910       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0319 19:25:48.275642       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0319 19:25:48.338113       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.234.191"}
	I0319 19:25:48.410085       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.127.187"}
	I0319 19:25:50.175721       1 controller.go:615] quota admission added evaluator for: endpoints
	I0319 19:25:50.308309       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0319 19:25:50.620656       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0319 19:25:50.673269       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
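
Editor's note: the repeated 503 / "could not get list of group versions" errors for v1beta1.metrics.k8s.io above are expected while the metrics-server pod is still Pending: the aggregated APIService stays unavailable until its backing service has ready endpoints. A hedged way to watch for that by shelling out to kubectl; the jsonpath expression is an assumption about where the Available condition lives in APIService status:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query the Available condition of the aggregated metrics APIService.
		out, err := exec.Command("kubectl", "get", "apiservice",
			"v1beta1.metrics.k8s.io",
			"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}`).CombinedOutput()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("v1beta1.metrics.k8s.io Available=%s\n", out)
	}
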
	
	
	==> kube-controller-manager [d55f70ac6893fb13fca3148167c35ff1ce39e99096fd6a38aea3fb7924642b03] <==
	I0319 19:25:50.203714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0319 19:25:50.203736       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0319 19:25:50.203741       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0319 19:25:50.203747       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0319 19:25:50.203808       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="newest-cni-935881"
	I0319 19:25:50.203825       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0319 19:25:50.219278       1 shared_informer.go:320] Caches are synced for service account
	I0319 19:25:50.220346       1 shared_informer.go:320] Caches are synced for daemon sets
	I0319 19:25:50.228826       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0319 19:25:50.244023       1 shared_informer.go:320] Caches are synced for namespace
	I0319 19:25:50.268600       1 shared_informer.go:320] Caches are synced for resource quota
	I0319 19:25:50.268669       1 shared_informer.go:320] Caches are synced for garbage collector
	I0319 19:25:50.268679       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0319 19:25:50.268687       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0319 19:25:50.326040       1 shared_informer.go:320] Caches are synced for crt configmap
	I0319 19:25:50.343582       1 shared_informer.go:320] Caches are synced for garbage collector
	I0319 19:25:50.343954       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0319 19:25:50.358573       1 shared_informer.go:320] Caches are synced for resource quota
	I0319 19:25:50.790431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="471.521281ms"
	I0319 19:25:50.802517       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="475.928584ms"
	I0319 19:25:50.827787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="25.221269ms"
	I0319 19:25:50.827947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="48.591µs"
	I0319 19:25:50.850353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="40.214µs"
	I0319 19:25:50.855327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="64.846282ms"
	I0319 19:25:50.856010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="517.977µs"
	
	
	==> kube-proxy [4030d6a84ab179cef677663b6f61dae1bf9476cf0898ba9a295aef6eff60efd7] <==
	I0319 19:25:48.335233       1 server_linux.go:66] "Using iptables proxy"
	I0319 19:25:48.503286       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0319 19:25:48.507827       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0319 19:25:48.676244       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0319 19:25:48.676370       1 server_linux.go:170] "Using iptables Proxier"
	I0319 19:25:48.678603       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0319 19:25:48.679006       1 server.go:497] "Version info" version="v1.32.2"
	I0319 19:25:48.679196       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:25:48.680461       1 config.go:199] "Starting service config controller"
	I0319 19:25:48.680540       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0319 19:25:48.680593       1 config.go:105] "Starting endpoint slice config controller"
	I0319 19:25:48.680621       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0319 19:25:48.681147       1 config.go:329] "Starting node config controller"
	I0319 19:25:48.683552       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0319 19:25:48.780853       1 shared_informer.go:320] Caches are synced for service config
	I0319 19:25:48.780894       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0319 19:25:48.801758       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [48df778850d4ca2ee9a2888524a833a77b75ede1aeca622bc14ada6d44c99565] <==
	I0319 19:25:43.647937       1 serving.go:386] Generated self-signed cert in-memory
	I0319 19:25:47.337706       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0319 19:25:47.337934       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0319 19:25:47.398813       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0319 19:25:47.399149       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0319 19:25:47.399248       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0319 19:25:47.399338       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0319 19:25:47.407712       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0319 19:25:47.408224       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0319 19:25:47.407850       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0319 19:25:47.408570       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0319 19:25:47.519595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0319 19:25:47.612947       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0319 19:25:47.613001       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 19 19:25:46 newest-cni-935881 kubelet[633]: I0319 19:25:46.550818     633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-935881"
	Mar 19 19:25:46 newest-cni-935881 kubelet[633]: I0319 19:25:46.948267     633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.229996     633 apiserver.go:52] "Watching apiserver"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.348357     633 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.386012     633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22af2f85-1811-454c-ae2a-6a04d7339cb0-xtables-lock\") pod \"kindnet-m8hgm\" (UID: \"22af2f85-1811-454c-ae2a-6a04d7339cb0\") " pod="kube-system/kindnet-m8hgm"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.386091     633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22af2f85-1811-454c-ae2a-6a04d7339cb0-lib-modules\") pod \"kindnet-m8hgm\" (UID: \"22af2f85-1811-454c-ae2a-6a04d7339cb0\") " pod="kube-system/kindnet-m8hgm"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.386118     633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4429ea9-baaf-41c2-b5be-5dadc0ade822-xtables-lock\") pod \"kube-proxy-584zk\" (UID: \"a4429ea9-baaf-41c2-b5be-5dadc0ade822\") " pod="kube-system/kube-proxy-584zk"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.386149     633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/22af2f85-1811-454c-ae2a-6a04d7339cb0-cni-cfg\") pod \"kindnet-m8hgm\" (UID: \"22af2f85-1811-454c-ae2a-6a04d7339cb0\") " pod="kube-system/kindnet-m8hgm"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.386180     633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4429ea9-baaf-41c2-b5be-5dadc0ade822-lib-modules\") pod \"kube-proxy-584zk\" (UID: \"a4429ea9-baaf-41c2-b5be-5dadc0ade822\") " pod="kube-system/kube-proxy-584zk"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.415375     633 kubelet_node_status.go:125] "Node was previously registered" node="newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.415473     633 kubelet_node_status.go:79] "Successfully registered node" node="newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.415503     633 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.416330     633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.451412     633 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: E0319 19:25:47.551446     633 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-935881\" already exists" pod="kube-system/kube-controller-manager-newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: E0319 19:25:47.552992     633 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-935881\" already exists" pod="kube-system/kube-controller-manager-newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.553021     633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: W0319 19:25:47.629062     633 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e5afa4e1b83eced6141a8bd03353d14a0e35d3fd0013b5c321e129cb5f7afaac/crio-76bdd733361738fff8a172bc04d0fd79fa200bcc559a83ef8d55c758eb9e8fde WatchSource:0}: Error finding container 76bdd733361738fff8a172bc04d0fd79fa200bcc559a83ef8d55c758eb9e8fde: Status 404 returned error can't find the container with id 76bdd733361738fff8a172bc04d0fd79fa200bcc559a83ef8d55c758eb9e8fde
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: E0319 19:25:47.684356     633 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-935881\" already exists" pod="kube-system/kube-scheduler-newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.684399     633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: E0319 19:25:47.756534     633 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-935881\" already exists" pod="kube-system/etcd-newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: I0319 19:25:47.756567     633 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-935881"
	Mar 19 19:25:47 newest-cni-935881 kubelet[633]: E0319 19:25:47.847944     633 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-935881\" already exists" pod="kube-system/kube-apiserver-newest-cni-935881"
	Mar 19 19:25:50 newest-cni-935881 kubelet[633]: E0319 19:25:50.305292     633 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742412350305091729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135892,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 19 19:25:50 newest-cni-935881 kubelet[633]: E0319 19:25:50.305324     633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742412350305091729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135892,},InodesUsed:&UInt64Value{Value:63,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-935881 -n newest-cni-935881
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-935881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-zjltz metrics-server-f79f97bbb-9s77l storage-provisioner dashboard-metrics-scraper-86c6bf9756-bltkx kubernetes-dashboard-7779f9b69b-82hnf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context newest-cni-935881 describe pod coredns-668d6bf9bc-zjltz metrics-server-f79f97bbb-9s77l storage-provisioner dashboard-metrics-scraper-86c6bf9756-bltkx kubernetes-dashboard-7779f9b69b-82hnf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context newest-cni-935881 describe pod coredns-668d6bf9bc-zjltz metrics-server-f79f97bbb-9s77l storage-provisioner dashboard-metrics-scraper-86c6bf9756-bltkx kubernetes-dashboard-7779f9b69b-82hnf: exit status 1 (89.835016ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-zjltz" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-9s77l" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-bltkx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-82hnf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context newest-cni-935881 describe pod coredns-668d6bf9bc-zjltz metrics-server-f79f97bbb-9s77l storage-provisioner dashboard-metrics-scraper-86c6bf9756-bltkx kubernetes-dashboard-7779f9b69b-82hnf: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.88s)
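
For manual triage of an image-verification failure like this one, the images actually present in the node can be listed directly. A minimal sketch, assuming the profile name from the logs above and standard minikube/CRI tooling (these commands are not part of the test run):

	out/minikube-linux-arm64 -p newest-cni-935881 image ls
	out/minikube-linux-arm64 -p newest-cni-935881 ssh -- sudo crictl images

The first lists images through minikube's image API; the second queries CRI-O inside the node, which is useful when the two views disagree.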

                                                
                                    

Test pass (297/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.83
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 5.9
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.09
18 TestDownloadOnly/v1.32.2/DeleteAll 0.22
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 180.43
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 10.97
35 TestAddons/parallel/Registry 17.55
37 TestAddons/parallel/InspektorGadget 11.78
38 TestAddons/parallel/MetricsServer 5.81
40 TestAddons/parallel/CSI 57.07
41 TestAddons/parallel/Headlamp 18.04
42 TestAddons/parallel/CloudSpanner 6.58
43 TestAddons/parallel/LocalPath 51.74
44 TestAddons/parallel/NvidiaDevicePlugin 6.63
45 TestAddons/parallel/Yakd 11.88
47 TestAddons/StoppedEnableDisable 12.21
48 TestCertOptions 36.42
49 TestCertExpiration 234.27
51 TestForceSystemdFlag 34.22
52 TestForceSystemdEnv 45.03
58 TestErrorSpam/setup 30.06
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.22
61 TestErrorSpam/pause 1.77
62 TestErrorSpam/unpause 1.76
63 TestErrorSpam/stop 1.45
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 51.86
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.47
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.48
75 TestFunctional/serial/CacheCmd/cache/add_local 1.45
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 34.29
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.79
86 TestFunctional/serial/LogsFileCmd 1.77
87 TestFunctional/serial/InvalidService 4.75
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 13.98
91 TestFunctional/parallel/DryRun 0.42
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.03
97 TestFunctional/parallel/ServiceCmdConnect 12.72
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 25.15
101 TestFunctional/parallel/SSHCmd 0.76
102 TestFunctional/parallel/CpCmd 2.35
104 TestFunctional/parallel/FileSync 0.39
105 TestFunctional/parallel/CertSync 2.6
109 TestFunctional/parallel/NodeLabels 0.14
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
113 TestFunctional/parallel/License 0.26
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
129 TestFunctional/parallel/MountCmd/any-port 9.48
130 TestFunctional/parallel/ServiceCmd/List 0.68
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
133 TestFunctional/parallel/ServiceCmd/Format 0.37
134 TestFunctional/parallel/ServiceCmd/URL 0.36
135 TestFunctional/parallel/MountCmd/specific-port 2.16
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.78
137 TestFunctional/parallel/Version/short 0.1
138 TestFunctional/parallel/Version/components 1.34
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.77
144 TestFunctional/parallel/ImageCommands/Setup 0.69
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.34
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.75
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.99
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 181.27
163 TestMultiControlPlane/serial/DeployApp 8.7
164 TestMultiControlPlane/serial/PingHostFromPods 1.65
165 TestMultiControlPlane/serial/AddWorkerNode 34.89
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
168 TestMultiControlPlane/serial/CopyFile 19.09
169 TestMultiControlPlane/serial/StopSecondaryNode 12.76
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 23.26
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.42
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 204.96
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.5
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
176 TestMultiControlPlane/serial/StopCluster 35.86
177 TestMultiControlPlane/serial/RestartCluster 96.22
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
179 TestMultiControlPlane/serial/AddSecondaryNode 72.76
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
184 TestJSONOutput/start/Command 48.64
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.73
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.67
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.84
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 39.5
210 TestKicCustomNetwork/use_default_bridge_network 32.71
211 TestKicExistingNetwork 32.96
212 TestKicCustomSubnet 31.38
213 TestKicStaticIP 34.42
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 69.08
218 TestMountStart/serial/StartWithMountFirst 6.45
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 9.2
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.62
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.22
225 TestMountStart/serial/RestartStopped 7.82
226 TestMountStart/serial/VerifyMountPostStop 0.25
229 TestMultiNode/serial/FreshStart2Nodes 82.17
230 TestMultiNode/serial/DeployApp2Nodes 7.45
231 TestMultiNode/serial/PingHostFrom2Pods 1.01
232 TestMultiNode/serial/AddNode 30.38
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.67
235 TestMultiNode/serial/CopyFile 10.13
236 TestMultiNode/serial/StopNode 2.22
237 TestMultiNode/serial/StartAfterStop 10.33
238 TestMultiNode/serial/RestartKeepsNodes 82.32
239 TestMultiNode/serial/DeleteNode 5.3
240 TestMultiNode/serial/StopMultiNode 23.85
241 TestMultiNode/serial/RestartMultiNode 57.25
242 TestMultiNode/serial/ValidateNameConflict 33.55
247 TestPreload 129.55
249 TestScheduledStopUnix 110.94
252 TestInsufficientStorage 13.19
253 TestRunningBinaryUpgrade 70.84
255 TestKubernetesUpgrade 240.67
256 TestMissingContainerUpgrade 165.16
258 TestPause/serial/Start 55.9
259 TestPause/serial/SecondStartNoReconfiguration 26.68
260 TestPause/serial/Pause 0.88
261 TestPause/serial/VerifyStatus 0.37
262 TestPause/serial/Unpause 0.86
263 TestPause/serial/PauseAgain 1.26
264 TestPause/serial/DeletePaused 3.22
265 TestPause/serial/VerifyDeletedResources 3.45
266 TestStoppedBinaryUpgrade/Setup 0.84
267 TestStoppedBinaryUpgrade/Upgrade 75.06
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
278 TestNoKubernetes/serial/StartWithK8s 42.82
286 TestNetworkPlugins/group/false 5.48
290 TestNoKubernetes/serial/StartWithStopK8s 22.53
291 TestNoKubernetes/serial/Start 7.29
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.43
293 TestNoKubernetes/serial/ProfileList 2.96
294 TestNoKubernetes/serial/Stop 1.27
295 TestNoKubernetes/serial/StartNoArgs 8.27
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
298 TestStartStop/group/old-k8s-version/serial/FirstStart 165.26
300 TestStartStop/group/no-preload/serial/FirstStart 67.17
301 TestStartStop/group/old-k8s-version/serial/DeployApp 10.69
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.09
303 TestStartStop/group/old-k8s-version/serial/Stop 12.04
304 TestStartStop/group/no-preload/serial/DeployApp 9.4
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/old-k8s-version/serial/SecondStart 143.7
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.57
308 TestStartStop/group/no-preload/serial/Stop 12.15
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.35
310 TestStartStop/group/no-preload/serial/SecondStart 270.08
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
314 TestStartStop/group/old-k8s-version/serial/Pause 3.12
316 TestStartStop/group/embed-certs/serial/FirstStart 52.94
317 TestStartStop/group/embed-certs/serial/DeployApp 9.34
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
319 TestStartStop/group/embed-certs/serial/Stop 11.94
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
321 TestStartStop/group/embed-certs/serial/SecondStart 277.49
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
325 TestStartStop/group/no-preload/serial/Pause 3.18
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.5
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.38
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 301.84
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
336 TestStartStop/group/embed-certs/serial/Pause 3.12
338 TestStartStop/group/newest-cni/serial/FirstStart 33.53
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.32
341 TestStartStop/group/newest-cni/serial/Stop 1.31
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/newest-cni/serial/SecondStart 16.93
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/Pause 2.9
348 TestNetworkPlugins/group/auto/Start 48.96
349 TestNetworkPlugins/group/auto/KubeletFlags 0.29
350 TestNetworkPlugins/group/auto/NetCatPod 10.3
351 TestNetworkPlugins/group/auto/DNS 0.17
352 TestNetworkPlugins/group/auto/Localhost 0.15
353 TestNetworkPlugins/group/auto/HairPin 0.16
354 TestNetworkPlugins/group/kindnet/Start 56.74
355 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.7
359 TestNetworkPlugins/group/calico/Start 68.72
360 TestNetworkPlugins/group/kindnet/ControllerPod 6
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
362 TestNetworkPlugins/group/kindnet/NetCatPod 14.32
363 TestNetworkPlugins/group/kindnet/DNS 0.34
364 TestNetworkPlugins/group/kindnet/Localhost 0.21
365 TestNetworkPlugins/group/kindnet/HairPin 0.25
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/custom-flannel/Start 57.34
368 TestNetworkPlugins/group/calico/KubeletFlags 0.39
369 TestNetworkPlugins/group/calico/NetCatPod 12.47
370 TestNetworkPlugins/group/calico/DNS 0.27
371 TestNetworkPlugins/group/calico/Localhost 0.18
372 TestNetworkPlugins/group/calico/HairPin 0.21
373 TestNetworkPlugins/group/enable-default-cni/Start 81.48
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.48
376 TestNetworkPlugins/group/custom-flannel/DNS 0.28
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
379 TestNetworkPlugins/group/flannel/Start 59.3
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.4
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
385 TestNetworkPlugins/group/flannel/ControllerPod 6
386 TestNetworkPlugins/group/bridge/Start 74.05
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
388 TestNetworkPlugins/group/flannel/NetCatPod 13.33
389 TestNetworkPlugins/group/flannel/DNS 0.24
390 TestNetworkPlugins/group/flannel/Localhost 0.2
391 TestNetworkPlugins/group/flannel/HairPin 0.21
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
393 TestNetworkPlugins/group/bridge/NetCatPod 10.27
394 TestNetworkPlugins/group/bridge/DNS 0.18
395 TestNetworkPlugins/group/bridge/Localhost 0.15
396 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (7.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-461100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-461100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.828751888s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.83s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0319 18:25:18.962650  453411 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0319 18:25:18.962738  453411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
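
The preload check above only stats the cached tarball on disk; the cache can also be inspected by hand. A minimal sketch, assuming the cache path reported in the log:

	ls -lh /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/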

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-461100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-461100: exit status 85 (92.428407ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-461100 | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC |          |
	|         | -p download-only-461100        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/19 18:25:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 18:25:11.183528  453417 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:25:11.183660  453417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:25:11.183670  453417 out.go:358] Setting ErrFile to fd 2...
	I0319 18:25:11.183676  453417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:25:11.183924  453417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	W0319 18:25:11.184054  453417 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20544-448023/.minikube/config/config.json: open /home/jenkins/minikube-integration/20544-448023/.minikube/config/config.json: no such file or directory
	I0319 18:25:11.184464  453417 out.go:352] Setting JSON to true
	I0319 18:25:11.185396  453417 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7645,"bootTime":1742401066,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0319 18:25:11.185466  453417 start.go:139] virtualization:  
	I0319 18:25:11.189894  453417 out.go:97] [download-only-461100] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0319 18:25:11.190078  453417 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball: no such file or directory
	I0319 18:25:11.190183  453417 notify.go:220] Checking for updates...
	I0319 18:25:11.193747  453417 out.go:169] MINIKUBE_LOCATION=20544
	I0319 18:25:11.196696  453417 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 18:25:11.199678  453417 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 18:25:11.202531  453417 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	I0319 18:25:11.205401  453417 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0319 18:25:11.211021  453417 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0319 18:25:11.211297  453417 driver.go:394] Setting default libvirt URI to qemu:///system
	I0319 18:25:11.244529  453417 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
	I0319 18:25:11.244659  453417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:25:11.300887  453417 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-19 18:25:11.291908674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:25:11.301009  453417 docker.go:318] overlay module found
	I0319 18:25:11.303908  453417 out.go:97] Using the docker driver based on user configuration
	I0319 18:25:11.303953  453417 start.go:297] selected driver: docker
	I0319 18:25:11.303961  453417 start.go:901] validating driver "docker" against <nil>
	I0319 18:25:11.304096  453417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:25:11.356793  453417 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-19 18:25:11.347219453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:25:11.356958  453417 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 18:25:11.357282  453417 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0319 18:25:11.357489  453417 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 18:25:11.360516  453417 out.go:169] Using Docker driver with root privileges
	I0319 18:25:11.363340  453417 cni.go:84] Creating CNI manager for ""
	I0319 18:25:11.363424  453417 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0319 18:25:11.363437  453417 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0319 18:25:11.363535  453417 start.go:340] cluster config:
	{Name:download-only-461100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-461100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 18:25:11.366568  453417 out.go:97] Starting "download-only-461100" primary control-plane node in "download-only-461100" cluster
	I0319 18:25:11.366607  453417 cache.go:121] Beginning downloading kic base image for docker with crio
	I0319 18:25:11.369523  453417 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0319 18:25:11.369573  453417 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 18:25:11.369671  453417 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0319 18:25:11.387664  453417 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0319 18:25:11.387860  453417 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0319 18:25:11.387964  453417 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0319 18:25:11.436041  453417 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0319 18:25:11.436132  453417 cache.go:56] Caching tarball of preloaded images
	I0319 18:25:11.437041  453417 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0319 18:25:11.440590  453417 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0319 18:25:11.440622  453417 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0319 18:25:11.534233  453417 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-461100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-461100"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
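
The download URL recorded in the start log pins an md5 checksum for the preload tarball. A minimal sketch for re-verifying it by hand, assuming the path and checksum taken from the log above:

	md5sum /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	# expected: 59cd2ef07b53f039bfd1761b921f2a02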

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-461100
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (5.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-818659 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-818659 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.901656132s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.90s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0319 18:25:25.316794  453411 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0319 18:25:25.316833  453411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-818659
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-818659: exit status 85 (91.758628ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-461100 | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC |                     |
	|         | -p download-only-461100        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC | 19 Mar 25 18:25 UTC |
	| delete  | -p download-only-461100        | download-only-461100 | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC | 19 Mar 25 18:25 UTC |
	| start   | -o=json --download-only        | download-only-818659 | jenkins | v1.35.0 | 19 Mar 25 18:25 UTC |                     |
	|         | -p download-only-818659        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/19 18:25:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 18:25:19.461551  453619 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:25:19.461742  453619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:25:19.461756  453619 out.go:358] Setting ErrFile to fd 2...
	I0319 18:25:19.461762  453619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:25:19.462051  453619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 18:25:19.462478  453619 out.go:352] Setting JSON to true
	I0319 18:25:19.463415  453619 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7653,"bootTime":1742401066,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0319 18:25:19.463477  453619 start.go:139] virtualization:  
	I0319 18:25:19.466827  453619 out.go:97] [download-only-818659] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0319 18:25:19.467090  453619 notify.go:220] Checking for updates...
	I0319 18:25:19.469926  453619 out.go:169] MINIKUBE_LOCATION=20544
	I0319 18:25:19.472886  453619 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 18:25:19.475784  453619 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 18:25:19.478726  453619 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	I0319 18:25:19.481529  453619 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0319 18:25:19.487306  453619 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0319 18:25:19.487582  453619 driver.go:394] Setting default libvirt URI to qemu:///system
	I0319 18:25:19.511604  453619 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
	I0319 18:25:19.511731  453619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:25:19.566634  453619 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-19 18:25:19.558005049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:25:19.566751  453619 docker.go:318] overlay module found
	I0319 18:25:19.569697  453619 out.go:97] Using the docker driver based on user configuration
	I0319 18:25:19.569723  453619 start.go:297] selected driver: docker
	I0319 18:25:19.569735  453619 start.go:901] validating driver "docker" against <nil>
	I0319 18:25:19.569994  453619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:25:19.622221  453619 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-19 18:25:19.6129986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:25:19.622379  453619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 18:25:19.622709  453619 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0319 18:25:19.622864  453619 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 18:25:19.625915  453619 out.go:169] Using Docker driver with root privileges
	I0319 18:25:19.628578  453619 cni.go:84] Creating CNI manager for ""
	I0319 18:25:19.628649  453619 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0319 18:25:19.628663  453619 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0319 18:25:19.628748  453619 start.go:340] cluster config:
	{Name:download-only-818659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-818659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 18:25:19.631715  453619 out.go:97] Starting "download-only-818659" primary control-plane node in "download-only-818659" cluster
	I0319 18:25:19.631741  453619 cache.go:121] Beginning downloading kic base image for docker with crio
	I0319 18:25:19.634567  453619 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0319 18:25:19.634595  453619 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0319 18:25:19.634709  453619 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0319 18:25:19.650810  453619 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0319 18:25:19.650930  453619 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0319 18:25:19.650954  453619 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0319 18:25:19.650960  453619 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0319 18:25:19.650967  453619 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0319 18:25:19.696401  453619 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0319 18:25:19.696440  453619 cache.go:56] Caching tarball of preloaded images
	I0319 18:25:19.697216  453619 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0319 18:25:19.700264  453619 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0319 18:25:19.700290  453619 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	I0319 18:25:19.789284  453619 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:40a74f4030ed7e841ef78821ba704831 -> /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4
	I0319 18:25:23.401424  453619 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	I0319 18:25:23.401563  453619 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20544-448023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-arm64.tar.lz4 ...
	I0319 18:25:24.275435  453619 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0319 18:25:24.275813  453619 profile.go:143] Saving config to /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/download-only-818659/config.json ...
	I0319 18:25:24.275847  453619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/download-only-818659/config.json: {Name:mk3c4f91788ce3d402dbdc7ed6022127405f6372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 18:25:24.276036  453619 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0319 18:25:24.276209  453619 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20544-448023/.minikube/cache/linux/arm64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-818659 host does not exist
	  To start a cluster, run: "minikube start -p download-only-818659"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.09s)
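Exit status 85 here is the expected result, not a regression: a --download-only profile caches the base image, preload tarball, and binaries without ever creating the host, so there is nothing for "minikube logs" to read. A minimal sketch of the same check (profile name taken from the run above):

    out/minikube-linux-arm64 logs -p download-only-818659   # fails: the host was never created
    echo $?                                                 # 85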

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-818659
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0319 18:25:26.610147  453411 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-745400 --alsologtostderr --binary-mirror http://127.0.0.1:34039 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-745400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-745400
--- PASS: TestBinaryMirror (0.59s)
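The binary-mirror flow above can be reproduced by hand: the test points --binary-mirror at a local HTTP server and minikube fetches kubectl from it instead of dl.k8s.io. A sketch using the exact invocation from the run (any server exposing the same release paths should do):

    out/minikube-linux-arm64 start --download-only -p binary-mirror-745400 \
      --binary-mirror http://127.0.0.1:34039 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p binary-mirror-745400   # cleanup, as the helper does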

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-039972
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-039972: exit status 85 (73.971739ms)

                                                
                                                
-- stdout --
	* Profile "addons-039972" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-039972"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-039972
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-039972: exit status 85 (77.120403ms)

                                                
                                                
-- stdout --
	* Profile "addons-039972" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-039972"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
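Both PreSetup cases pass precisely because the addon commands refuse to act before the profile exists: exit status 85 plus a pointer to "minikube start". Condensed (same profile name as above, before TestAddons/Setup creates it):

    out/minikube-linux-arm64 addons enable dashboard -p addons-039972;  echo $?   # 85
    out/minikube-linux-arm64 addons disable dashboard -p addons-039972; echo $?   # 85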

                                                
                                    
TestAddons/Setup (180.43s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-039972 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-039972 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m0.430024588s)
--- PASS: TestAddons/Setup (180.43s)
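Once the three-minute setup completes, the long --addons list can be cross-checked against what actually came up. A sketch; the test itself relies instead on the per-addon parallel checks that follow:

    out/minikube-linux-arm64 -p addons-039972 addons list   # the addons enabled above should be reported as enabled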

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-039972 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-039972 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.97s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-039972 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-039972 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [922db505-c671-4161-8997-814725d2988d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [922db505-c671-4161-8997-814725d2988d] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003570256s
addons_test.go:633: (dbg) Run:  kubectl --context addons-039972 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-039972 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-039972 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-039972 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.97s)

                                                
                                    
TestAddons/parallel/Registry (17.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 14.437528ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-d2brd" [9123c314-3886-4f1c-aacf-b378fea5fb39] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005632326s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-f7zs5" [70bb0c85-0653-4b37-8242-48ad64e1e791] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003736811s
addons_test.go:331: (dbg) Run:  kubectl --context addons-039972 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-039972 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-039972 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.565549577s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 ip
2025/03/19 18:29:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.55s)
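The registry check reduces to one in-cluster probe of the service DNS name; it can be rerun standalone (command reproduced from the log):

    kubectl --context addons-039972 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"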

                                                
                                    
TestAddons/parallel/InspektorGadget (11.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tmnzh" [f59103a1-8105-4513-af28-2efe963fd744] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003926593s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 addons disable inspektor-gadget --alsologtostderr -v=1: (5.772616363s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 12.168892ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-xtj74" [e1fa4fa3-31a6-4db3-a237-73516e02c68c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003396489s
addons_test.go:402: (dbg) Run:  kubectl --context addons-039972 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)

                                                
                                    
TestAddons/parallel/CSI (57.07s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0319 18:29:29.533689  453411 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0319 18:29:29.539496  453411 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0319 18:29:29.539526  453411 kapi.go:107] duration metric: took 8.790762ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.803094ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-039972 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-039972 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ae044f71-eb7c-4d7b-8252-4e59b8a185ea] Pending
helpers_test.go:344: "task-pv-pod" [ae044f71-eb7c-4d7b-8252-4e59b8a185ea] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ae044f71-eb7c-4d7b-8252-4e59b8a185ea] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003574336s
addons_test.go:511: (dbg) Run:  kubectl --context addons-039972 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-039972 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-039972 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-039972 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-039972 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-039972 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-039972 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4190286b-8f85-4671-b992-ce15d1c7eae8] Pending
helpers_test.go:344: "task-pv-pod-restore" [4190286b-8f85-4671-b992-ce15d1c7eae8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4190286b-8f85-4671-b992-ce15d1c7eae8] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.006635748s
addons_test.go:553: (dbg) Run:  kubectl --context addons-039972 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-039972 delete pod task-pv-pod-restore: (1.461313613s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-039972 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-039972 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 addons disable volumesnapshots --alsologtostderr -v=1: (1.431923469s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.899596184s)
--- PASS: TestAddons/parallel/CSI (57.07s)
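The CSI run walks the full cycle: provision (hpvc), attach (task-pv-pod), snapshot (new-snapshot-demo), restore (hpvc-restore and task-pv-pod-restore). The block of repeated helper lines above is a poll on the claim's phase; a hand-rolled equivalent, assuming the same PVC name:

    until [ "$(kubectl --context addons-039972 get pvc hpvc \
        -o jsonpath={.status.phase} -n default)" = "Bound" ]; do
      sleep 2   # same jsonpath the helper polls until the claim binds
    done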

                                                
                                    
TestAddons/parallel/Headlamp (18.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-039972 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-29rg4" [8404b232-fe72-4357-b3e3-5a7287a2c598] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-29rg4" [8404b232-fe72-4357-b3e3-5a7287a2c598] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-29rg4" [8404b232-fe72-4357-b3e3-5a7287a2c598] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003150294s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 addons disable headlamp --alsologtostderr -v=1: (6.058186428s)
--- PASS: TestAddons/parallel/Headlamp (18.04s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-vmcl2" [e276310d-b75c-4499-869c-46fc8b134186] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003176439s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
TestAddons/parallel/LocalPath (51.74s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-039972 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-039972 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039972 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dcefd93f-ab7d-48b2-bc71-46c4901a9a72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dcefd93f-ab7d-48b2-bc71-46c4901a9a72] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dcefd93f-ab7d-48b2-bc71-46c4901a9a72] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003318009s
addons_test.go:906: (dbg) Run:  kubectl --context addons-039972 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 ssh "cat /opt/local-path-provisioner/pvc-e1416b69-1a54-4b11-ad23-fffdc53b9f83_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-039972 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-039972 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.575543951s)
--- PASS: TestAddons/parallel/LocalPath (51.74s)
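The local-path test writes through the PVC and reads the file back from the provisioner's directory on the node. Note that the pvc-... UID segment in the path is unique to each run; the value below is this run's:

    out/minikube-linux-arm64 -p addons-039972 ssh \
      "cat /opt/local-path-provisioner/pvc-e1416b69-1a54-4b11-ad23-fffdc53b9f83_default_test-pvc/file1"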

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.63s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6qm78" [784e0b38-6971-40b8-b4b3-940ba70d5823] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006817128s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.63s)

                                                
                                    
TestAddons/parallel/Yakd (11.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-wj8wb" [358065b8-1359-498f-96f0-4feaf77e01a9] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003375357s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-039972 addons disable yakd --alsologtostderr -v=1: (5.87218564s)
--- PASS: TestAddons/parallel/Yakd (11.88s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-039972
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-039972: (11.917079364s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-039972
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-039972
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-039972
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

                                                
                                    
TestCertOptions (36.42s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-207873 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-207873 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.761097278s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-207873 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-207873 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-207873 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-207873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-207873
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-207873: (1.991054997s)
--- PASS: TestCertOptions (36.42s)
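The SAN and port assertions come from parsing the apiserver certificate on the node; the same inspection works by hand (openssl invocation reproduced from the run, grep filter added for illustration):

    out/minikube-linux-arm64 -p cert-options-207873 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'   # expect 192.168.15.15 and www.google.com among the SANs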

                                                
                                    
TestCertExpiration (234.27s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-187783 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0319 19:11:12.869929  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-187783 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.535787577s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-187783 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-187783 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.301372757s)
helpers_test.go:175: Cleaning up "cert-expiration-187783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-187783
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-187783: (2.430209469s)
--- PASS: TestCertExpiration (234.27s)
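The test first starts with three-minute certificates, then restarts the same profile with --cert-expiration=8760h (one year). The remaining lifetime can be checked in place; a sketch (openssl's -checkend takes seconds):

    out/minikube-linux-arm64 -p cert-expiration-187783 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
    out/minikube-linux-arm64 -p cert-expiration-187783 ssh \
      "openssl x509 -checkend 3600 -noout -in /var/lib/minikube/certs/apiserver.crt"   # non-zero exit if it expires within the hour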

                                                
                                    
TestForceSystemdFlag (34.22s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-754253 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-754253 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.471633313s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-754253 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-754253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-754253
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-754253: (2.445399172s)
--- PASS: TestForceSystemdFlag (34.22s)

                                                
                                    
TestForceSystemdEnv (45.03s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-263695 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-263695 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.395401494s)
helpers_test.go:175: Cleaning up "force-systemd-env-263695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-263695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-263695: (2.636416995s)
--- PASS: TestForceSystemdEnv (45.03s)

                                                
                                    
TestErrorSpam/setup (30.06s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-759142 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-759142 --driver=docker  --container-runtime=crio
E0319 18:33:28.576090  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:28.583101  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:28.594483  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:28.615874  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:28.657243  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:28.738688  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:28.900343  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:29.222022  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:29.863993  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:31.145252  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:33.707489  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:33:38.829660  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-759142 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-759142 --driver=docker  --container-runtime=crio: (30.056508579s)
--- PASS: TestErrorSpam/setup (30.06s)

                                                
                                    
TestErrorSpam/start (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
TestErrorSpam/status (1.22s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 status
--- PASS: TestErrorSpam/status (1.22s)

                                                
                                    
TestErrorSpam/pause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 pause
--- PASS: TestErrorSpam/pause (1.77s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 stop: (1.238845099s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-759142 --log_dir /tmp/nospam-759142 stop
--- PASS: TestErrorSpam/stop (1.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20544-448023/.minikube/files/etc/test/nested/copy/453411/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (51.86s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160492 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0319 18:34:09.553341  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-160492 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (51.855681304s)
--- PASS: TestFunctional/serial/StartWithProxy (51.86s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.47s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0319 18:34:41.729331  453411 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160492 --alsologtostderr -v=8
E0319 18:34:50.514917  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-160492 --alsologtostderr -v=8: (37.467623515s)
functional_test.go:680: soft start took 37.471557348s for "functional-160492" cluster.
I0319 18:35:19.197273  453411 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (37.47s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-160492 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 cache add registry.k8s.io/pause:3.1: (1.605728669s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 cache add registry.k8s.io/pause:3.3: (1.459756156s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 cache add registry.k8s.io/pause:latest: (1.41024863s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-160492 /tmp/TestFunctionalserialCacheCmdcacheadd_local1885986631/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cache add minikube-local-cache-test:functional-160492
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cache delete minikube-local-cache-test:functional-160492
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-160492
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.400133ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 cache reload: (1.208640747s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)
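The reload sequence is: delete the image from the node's runtime, confirm crictl no longer finds it (the expected exit status 1 above), then repopulate it from minikube's local cache. Condensed from the run:

    out/minikube-linux-arm64 -p functional-160492 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-160492 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-linux-arm64 -p functional-160492 cache reload
    out/minikube-linux-arm64 -p functional-160492 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again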

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 kubectl -- --context functional-160492 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-160492 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (34.29s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160492 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-160492 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.291475056s)
functional_test.go:778: restart took 34.291576701s for "functional-160492" cluster.
I0319 18:36:02.605622  453411 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (34.29s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-160492 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
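
ComponentHealth lists the control-plane pods as JSON and reports each component's phase and Ready condition, as logged above. A minimal sketch of that check, decoding only the fields the assertion reads; the context name comes from this run:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList covers just the fields the health check needs.
	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-160492",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pods.Items {
			name := p.Metadata.Labels["component"] // e.g. etcd, kube-apiserver
			fmt.Printf("%s phase: %s\n", name, p.Status.Phase)
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s ready: %s\n", name, c.Status)
				}
			}
		}
	}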

TestFunctional/serial/LogsCmd (1.79s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 logs: (1.787901924s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

TestFunctional/serial/LogsFileCmd (1.77s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 logs --file /tmp/TestFunctionalserialLogsFileCmd3423460378/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 logs --file /tmp/TestFunctionalserialLogsFileCmd3423460378/001/logs.txt: (1.765228436s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

TestFunctional/serial/InvalidService (4.75s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-160492 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-160492
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-160492: exit status 115 (414.413249ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31455 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-160492 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-160492 delete -f testdata/invalidsvc.yaml: (1.088657074s)
--- PASS: TestFunctional/serial/InvalidService (4.75s)

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 config get cpus: exit status 14 (87.934832ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 config get cpus: exit status 14 (67.153863ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
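
The round-trip above hinges on `minikube config get` exiting with status 14 for an unset key (both Non-zero exits in the log). A minimal sketch that replays the same unset/get/set/get cycle and asserts on the exit code, assuming the minikube binary on PATH:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// config runs `minikube -p functional-160492 config ...` and returns the
	// trimmed output plus the process exit code (0 on success).
	func config(args ...string) (string, int) {
		full := append([]string{"-p", "functional-160492", "config"}, args...)
		out, err := exec.Command("minikube", full...).CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return strings.TrimSpace(string(out)), exitErr.ExitCode()
		}
		return strings.TrimSpace(string(out)), 0
	}

	func main() {
		config("unset", "cpus")
		// An unset key is reported with exit status 14, as in the log above.
		if _, code := config("get", "cpus"); code != 14 {
			fmt.Println("expected exit status 14 for an unset key, got", code)
			return
		}
		config("set", "cpus", "2")
		val, _ := config("get", "cpus")
		fmt.Println("cpus =", val)
		config("unset", "cpus")
	}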

TestFunctional/parallel/DashboardCmd (13.98s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-160492 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-160492 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 481303: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.98s)

TestFunctional/parallel/DryRun (0.42s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160492 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-160492 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.686855ms)

-- stdout --
	* [functional-160492] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20544
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0319 18:36:44.452287  481005 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:36:44.452460  481005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:36:44.452475  481005 out.go:358] Setting ErrFile to fd 2...
	I0319 18:36:44.452480  481005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:36:44.452732  481005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 18:36:44.453080  481005 out.go:352] Setting JSON to false
	I0319 18:36:44.454072  481005 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8338,"bootTime":1742401066,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0319 18:36:44.454139  481005 start.go:139] virtualization:  
	I0319 18:36:44.457371  481005 out.go:177] * [functional-160492] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0319 18:36:44.460724  481005 out.go:177]   - MINIKUBE_LOCATION=20544
	I0319 18:36:44.460922  481005 notify.go:220] Checking for updates...
	I0319 18:36:44.466167  481005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 18:36:44.468936  481005 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 18:36:44.471658  481005 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	I0319 18:36:44.474471  481005 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0319 18:36:44.477365  481005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 18:36:44.480753  481005 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:36:44.481315  481005 driver.go:394] Setting default libvirt URI to qemu:///system
	I0319 18:36:44.504080  481005 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
	I0319 18:36:44.504208  481005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:36:44.567609  481005 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-19 18:36:44.558431041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:36:44.567718  481005 docker.go:318] overlay module found
	I0319 18:36:44.570859  481005 out.go:177] * Using the docker driver based on existing profile
	I0319 18:36:44.573595  481005 start.go:297] selected driver: docker
	I0319 18:36:44.573611  481005 start.go:901] validating driver "docker" against &{Name:functional-160492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-160492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 18:36:44.573717  481005 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 18:36:44.577286  481005 out.go:201] 
	W0319 18:36:44.580136  481005 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0319 18:36:44.583018  481005 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160492 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)

TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160492 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-160492 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (212.714427ms)

-- stdout --
	* [functional-160492] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20544
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0319 18:36:44.245620  480959 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:36:44.245757  480959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:36:44.245770  480959 out.go:358] Setting ErrFile to fd 2...
	I0319 18:36:44.245775  480959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:36:44.246764  480959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 18:36:44.247157  480959 out.go:352] Setting JSON to false
	I0319 18:36:44.248146  480959 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8338,"bootTime":1742401066,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0319 18:36:44.248222  480959 start.go:139] virtualization:  
	I0319 18:36:44.251718  480959 out.go:177] * [functional-160492] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0319 18:36:44.255589  480959 out.go:177]   - MINIKUBE_LOCATION=20544
	I0319 18:36:44.255629  480959 notify.go:220] Checking for updates...
	I0319 18:36:44.261664  480959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 18:36:44.264532  480959 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 18:36:44.267447  480959 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	I0319 18:36:44.270565  480959 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0319 18:36:44.273453  480959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 18:36:44.276828  480959 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:36:44.277445  480959 driver.go:394] Setting default libvirt URI to qemu:///system
	I0319 18:36:44.299838  480959 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
	I0319 18:36:44.300032  480959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:36:44.377165  480959 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-03-19 18:36:44.367815349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:36:44.377274  480959 docker.go:318] overlay module found
	I0319 18:36:44.380322  480959 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0319 18:36:44.383121  480959 start.go:297] selected driver: docker
	I0319 18:36:44.383138  480959 start.go:901] validating driver "docker" against &{Name:functional-160492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-160492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 18:36:44.383232  480959 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 18:36:44.386676  480959 out.go:201] 
	W0319 18:36:44.389550  480959 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0319 18:36:44.392380  480959 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.03s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (12.72s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-160492 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-160492 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-5j8gw" [aba45354-50dd-4dde-94bb-61e78812346b] Pending
helpers_test.go:344: "hello-node-connect-8449669db6-5j8gw" [aba45354-50dd-4dde-94bb-61e78812346b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-5j8gw" [aba45354-50dd-4dde-94bb-61e78812346b] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.005111265s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:30868
functional_test.go:1692: http://192.168.49.2:30868: success! body:

Hostname: hello-node-connect-8449669db6-5j8gw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30868
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.72s)
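
The last step resolves the NodePort URL and verifies the echoserver's reply. A minimal sketch of just that HTTP probe, using the endpoint this run printed; like the test, it treats a body carrying the pod hostname as success:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		// NodePort endpoint taken from this run's log.
		resp, err := client.Get("http://192.168.49.2:30868")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The echoserver reflects the request; any body that includes the
		// pod hostname counts as a working service.
		if strings.Contains(string(body), "Hostname:") {
			fmt.Println("success! body:\n" + string(body))
		} else {
			fmt.Printf("unexpected response (%d): %s\n", resp.StatusCode, body)
		}
	}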

TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.15s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4180107c-fd3a-4d2e-8368-e909bd1d9c58] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003703648s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-160492 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-160492 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-160492 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-160492 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fd9be46f-8466-4cd5-b4fb-ba4284332f4d] Pending
helpers_test.go:344: "sp-pod" [fd9be46f-8466-4cd5-b4fb-ba4284332f4d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fd9be46f-8466-4cd5-b4fb-ba4284332f4d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004018967s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-160492 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-160492 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-160492 delete -f testdata/storage-provisioner/pod.yaml: (1.041791984s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-160492 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aba7dc72-ddcb-4741-b85d-acd8f6a76da8] Pending
helpers_test.go:344: "sp-pod" [aba7dc72-ddcb-4741-b85d-acd8f6a76da8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003409056s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-160492 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.15s)
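
The pass above is really a persistence check: a file written through the first sp-pod must still be visible after the pod is deleted and recreated from the same PVC-backed manifest. A minimal sketch of that sequence with plain kubectl, assuming the context and the test's own testdata paths from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubectl runs a command against the functional-160492 context and
	// echoes its combined output.
	func kubectl(args ...string) error {
		full := append([]string{"--context", "functional-160492"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		fmt.Printf("$ kubectl %s\n%s", strings.Join(full, " "), out)
		return err
	}

	func main() {
		// Touch a file on the PVC-backed volume through the running pod.
		_ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		// Recreate the pod; the PVC (and its data) must survive.
		_ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		_ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		_ = kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
		// The file written before the delete should still be there.
		_ = kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	}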

TestFunctional/parallel/SSHCmd (0.76s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

TestFunctional/parallel/CpCmd (2.35s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh -n functional-160492 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cp functional-160492:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd740875449/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh -n functional-160492 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh -n functional-160492 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)

TestFunctional/parallel/FileSync (0.39s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/453411/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo cat /etc/test/nested/copy/453411/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.6s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/453411.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo cat /etc/ssl/certs/453411.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/453411.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo cat /usr/share/ca-certificates/453411.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/4534112.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo cat /etc/ssl/certs/4534112.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/4534112.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo cat /usr/share/ca-certificates/4534112.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.60s)

TestFunctional/parallel/NodeLabels (0.14s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-160492 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 ssh "sudo systemctl is-active docker": exit status 1 (345.481481ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 ssh "sudo systemctl is-active containerd": exit status 1 (387.246252ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
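
Both Non-zero exits above are the expected result: `systemctl is-active` exits 0 only when a unit is active (3 when inactive), so on a crio node the docker and containerd probes must fail while printing "inactive". A minimal sketch that decodes exactly that, with the profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			// Output() still returns the captured stdout when the command
			// exits non-zero, so the state string survives the failure.
			out, err := exec.Command("minikube", "-p", "functional-160492",
				"ssh", "sudo systemctl is-active "+unit).Output()
			state := strings.TrimSpace(string(out))
			switch {
			case err != nil && state == "inactive":
				fmt.Printf("%s: inactive, as expected for a crio node\n", unit)
			case err == nil:
				fmt.Printf("%s: unexpectedly active\n", unit)
			default:
				fmt.Printf("%s: unexpected state %q (%v)\n", unit, state, err)
			}
		}
	}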

TestFunctional/parallel/License (0.26s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-160492 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-160492 tunnel --alsologtostderr]
E0319 18:36:12.439646  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-160492 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 478730: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-160492 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-160492 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-160492 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [33279eda-74ef-48f4-b21a-57bcf6b48249] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [33279eda-74ef-48f4-b21a-57bcf6b48249] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003925193s
I0319 18:36:21.316422  453411 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-160492 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.33.137 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-160492 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-160492 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-160492 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-kchqp" [d94a3af8-f194-4fbb-bd45-4ca7348c43ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-kchqp" [d94a3af8-f194-4fbb-bd45-4ca7348c43ae] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003407142s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "363.769574ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "58.88948ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "340.397242ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "63.751217ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (9.48s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdany-port3389431323/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1742409399764422129" to /tmp/TestFunctionalparallelMountCmdany-port3389431323/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1742409399764422129" to /tmp/TestFunctionalparallelMountCmdany-port3389431323/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1742409399764422129" to /tmp/TestFunctionalparallelMountCmdany-port3389431323/001/test-1742409399764422129
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.521194ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0319 18:36:40.079013  453411 retry.go:31] will retry after 677.078367ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 19 18:36 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 19 18:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 19 18:36 test-1742409399764422129
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh cat /mount-9p/test-1742409399764422129
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-160492 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e9f2035a-54cf-421f-a554-80bcb78a4fd8] Pending
helpers_test.go:344: "busybox-mount" [e9f2035a-54cf-421f-a554-80bcb78a4fd8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e9f2035a-54cf-421f-a554-80bcb78a4fd8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e9f2035a-54cf-421f-a554-80bcb78a4fd8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003261909s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-160492 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdany-port3389431323/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.48s)
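
For reference, the 9p flow exercised above can be reproduced by hand. A minimal sketch, assuming a running functional-160492 profile and a placeholder host directory /tmp/demo (not from this run):

  # start a 9p mount in the background, mirroring the test's invocation
  out/minikube-linux-arm64 mount -p functional-160492 /tmp/demo:/mount-9p --alsologtostderr -v=1 &
  # confirm from inside the node that /mount-9p really is a 9p filesystem
  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T /mount-9p | grep 9p"
  # inspect the mounted contents, then force-unmount
  out/minikube-linux-arm64 -p functional-160492 ssh -- ls -la /mount-9p
  out/minikube-linux-arm64 -p functional-160492 ssh "sudo umount -f /mount-9p"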

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 service list -o json
functional_test.go:1511: Took "601.800569ms" to run "out/minikube-linux-arm64 -p functional-160492 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)
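
The JSON form is meant for scripting. A sketch of filtering it on the host, assuming jq is available and that this minikube version emits an array of objects with Namespace/Name/URLs fields (field names assumed, not shown in this log):

  # print one "namespace name" pair per service; adjust field names to your minikube version
  out/minikube-linux-arm64 -p functional-160492 service list -o json | jq -r '.[] | "\(.Namespace) \(.Name)"'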

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31177
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31177
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
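
A natural follow-up, not performed by the test, is to probe the discovered endpoint. A sketch assuming hello-node answers plain HTTP (it resolved to http://192.168.49.2:31177 above):

  # capture the NodePort URL, then hit it once
  URL=$(out/minikube-linux-arm64 -p functional-160492 service hello-node --url)
  curl -s "$URL"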

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdspecific-port150122699/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (569.820034ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0319 18:36:49.815641  453411 retry.go:31] will retry after 268.127335ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdspecific-port150122699/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 ssh "sudo umount -f /mount-9p": exit status 1 (377.177823ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-160492 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdspecific-port150122699/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.78s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3214815335/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3214815335/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3214815335/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T" /mount1: exit status 1 (1.049814414s)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0319 18:36:52.466010  453411 retry.go:31] will retry after 560.417518ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-160492 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3214815335/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3214815335/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160492 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3214815335/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.78s)
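
The teardown above hinges on the --kill flag, which terminates every background mount daemon for the profile in one shot rather than unmounting paths individually. The standalone equivalent, as run by the test:

  # kill all lingering "minikube mount" processes for this profile
  out/minikube-linux-arm64 mount -p functional-160492 --kill=true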

                                                
                                    
TestFunctional/parallel/Version/short (0.10s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 version -o=json --components: (1.342851934s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160492 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-160492
localhost/kicbase/echo-server:functional-160492
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160492 image ls --format short --alsologtostderr:
I0319 18:37:03.171494  483838 out.go:345] Setting OutFile to fd 1 ...
I0319 18:37:03.171646  483838 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:03.171655  483838 out.go:358] Setting ErrFile to fd 2...
I0319 18:37:03.171660  483838 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:03.171932  483838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
I0319 18:37:03.172541  483838 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:03.172826  483838 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:03.173356  483838 cli_runner.go:164] Run: docker container inspect functional-160492 --format={{.State.Status}}
I0319 18:37:03.194820  483838 ssh_runner.go:195] Run: systemctl --version
I0319 18:37:03.194878  483838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160492
I0319 18:37:03.215055  483838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/functional-160492/id_rsa Username:docker}
I0319 18:37:03.307107  483838 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
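
The stderr trace shows how image ls is implemented on the crio runtime: minikube opens an SSH session into the node and runs crictl there. The raw data can be inspected the same way; a sketch assuming jq on the host:

  # query the runtime's image list directly and pull out the tags
  out/minikube-linux-arm64 -p functional-160492 ssh -- sudo crictl images --output json | jq -r '.images[].repoTags[]'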

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160492 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.32.2            | e5aac5df76d9b | 98.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | 82dfa03f692fb | 69MB   |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-160492  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-160492  | e006e44637273 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/library/nginx                 | alpine             | cedb667e1a7b4 | 50.8MB |
| docker.io/library/nginx                 | latest             | 2c9168b3c9a84 | 201MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | ee75e27fff91c | 99MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | e1181ee320546 | 99MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 6417e1437b6d9 | 95MB   |
| registry.k8s.io/kube-controller-manager | v1.32.2            | 3c9285acfd2ff | 88.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160492 image ls --format table --alsologtostderr:
I0319 18:37:04.008997  484045 out.go:345] Setting OutFile to fd 1 ...
I0319 18:37:04.009237  484045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:04.009267  484045 out.go:358] Setting ErrFile to fd 2...
I0319 18:37:04.009289  484045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:04.009692  484045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
I0319 18:37:04.011310  484045 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:04.011768  484045 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:04.016542  484045 cli_runner.go:164] Run: docker container inspect functional-160492 --format={{.State.Status}}
I0319 18:37:04.040952  484045 ssh_runner.go:195] Run: systemctl --version
I0319 18:37:04.041004  484045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160492
I0319 18:37:04.061328  484045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/functional-160492/id_rsa Username:docker}
I0319 18:37:04.150664  484045 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160492 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"e006e4463727352b6356a773bdc0aa3797207e12d9c4665baae874a0cfe38baf","repoDigests":["localhost/minikube-local-cache-test@sha256:970bb33b9489f5a67fc8e46ad7e64dee1d1cc90677c2dffd082dc0377ddeaeec"],"repoTags":["localhost/minikube-local-cache-test:functional-160492"],"size":"3330"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["regist
ry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc9
3efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@s
ha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:56568860b56c0bc8099fe1b2d84f43a18939e217e6c619126214c0f71bc27626"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50780648"},{"id":"2c9168b3c9a84851f91e03534dc4136951e9f581ab3ac8ee38b28b49ad57ba38","repoDigests":["docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19","docker.io/library/nginx@sha256:efa529649e9928104685a25f2276f3d51a08b2ed03a267e95f45a825b78547b0"],"repoTags":["docker.io/library/nginx:latest"],"size":"201397159"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echo
server-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32","repoDigests":["registry.k8s.io/kube-apiserver@sha256:22cdd0e13fe99dc2e5a3476b92895d89d81285cbe73b592033fa05b68c6c19a3","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"94991840"},{"id":"e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062","repoDigests":["registry.k8s.io/kube-proxy@sha256:6b93583f4856ea0923c6fffd91c802a2362511378390acc6e539a419210ee23b","registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"98313623"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8
017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6","repoDigests":["docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be","docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"99018802"},{"id":"ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f","repoDigests":["docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955","docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"99018290"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d
795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-160492"],"size":"4788229"},{"id":"3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90","registry.k8s.io/kube-controller-manager@sha256:737052e0a84309cec4e9e3f1baaf80160273511c809893db40ab595e494a8777"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"88241478"},{"id":"82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:a532964581fdb02b9d692589bb93db7d4b8a7bd8c120d8fb70803da0e3c83647"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"68973894"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160492 image ls --format json --alsologtostderr:
I0319 18:37:03.704561  483986 out.go:345] Setting OutFile to fd 1 ...
I0319 18:37:03.704714  483986 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:03.704720  483986 out.go:358] Setting ErrFile to fd 2...
I0319 18:37:03.704726  483986 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:03.705183  483986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
I0319 18:37:03.706776  483986 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:03.706960  483986 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:03.707708  483986 cli_runner.go:164] Run: docker container inspect functional-160492 --format={{.State.Status}}
I0319 18:37:03.750327  483986 ssh_runner.go:195] Run: systemctl --version
I0319 18:37:03.750379  483986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160492
I0319 18:37:03.779433  483986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/functional-160492/id_rsa Username:docker}
I0319 18:37:03.875993  483986 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
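
Each entry above carries id, repoDigests, repoTags, and size (bytes, as a string). A sketch for turning the blob into a size-sorted listing, assuming jq on the host:

  # largest images first; untagged entries print <none>
  out/minikube-linux-arm64 -p functional-160492 image ls --format json \
    | jq -r 'sort_by(.size | tonumber) | reverse | .[] | "\(.size)\t\(.repoTags[0] // "<none>")"'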

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160492 image ls --format yaml --alsologtostderr:
- id: e006e4463727352b6356a773bdc0aa3797207e12d9c4665baae874a0cfe38baf
repoDigests:
- localhost/minikube-local-cache-test@sha256:970bb33b9489f5a67fc8e46ad7e64dee1d1cc90677c2dffd082dc0377ddeaeec
repoTags:
- localhost/minikube-local-cache-test:functional-160492
size: "3330"
- id: e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6b93583f4856ea0923c6fffd91c802a2362511378390acc6e539a419210ee23b
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "98313623"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-160492
size: "4788229"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:a532964581fdb02b9d692589bb93db7d4b8a7bd8c120d8fb70803da0e3c83647
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "68973894"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2c9168b3c9a84851f91e03534dc4136951e9f581ab3ac8ee38b28b49ad57ba38
repoDigests:
- docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
- docker.io/library/nginx@sha256:efa529649e9928104685a25f2276f3d51a08b2ed03a267e95f45a825b78547b0
repoTags:
- docker.io/library/nginx:latest
size: "201397159"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:56568860b56c0bc8099fe1b2d84f43a18939e217e6c619126214c0f71bc27626
repoTags:
- docker.io/library/nginx:alpine
size: "50780648"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:22cdd0e13fe99dc2e5a3476b92895d89d81285cbe73b592033fa05b68c6c19a3
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "94991840"
- id: 3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
- registry.k8s.io/kube-controller-manager@sha256:737052e0a84309cec4e9e3f1baaf80160273511c809893db40ab595e494a8777
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "88241478"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6
repoDigests:
- docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "99018802"
- id: ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f
repoDigests:
- docker.io/kindest/kindnetd@sha256:86c933f3845d6a993c8f64632752b10aae67a4756c59096b3259426e839be955
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "99018290"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160492 image ls --format yaml --alsologtostderr:
I0319 18:37:03.437871  483891 out.go:345] Setting OutFile to fd 1 ...
I0319 18:37:03.438073  483891 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:03.438086  483891 out.go:358] Setting ErrFile to fd 2...
I0319 18:37:03.438091  483891 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:03.438464  483891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
I0319 18:37:03.443484  483891 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:03.443650  483891 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:03.444170  483891 cli_runner.go:164] Run: docker container inspect functional-160492 --format={{.State.Status}}
I0319 18:37:03.464483  483891 ssh_runner.go:195] Run: systemctl --version
I0319 18:37:03.464539  483891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160492
I0319 18:37:03.496024  483891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/functional-160492/id_rsa Username:docker}
I0319 18:37:03.590536  483891 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160492 ssh pgrep buildkitd: exit status 1 (340.024305ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image build -t localhost/my-image:functional-160492 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 image build -t localhost/my-image:functional-160492 testdata/build --alsologtostderr: (3.192743284s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160492 image build -t localhost/my-image:functional-160492 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> aba2dcd9179
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-160492
--> dd572c0edc6
Successfully tagged localhost/my-image:functional-160492
dd572c0edc67ae24a560b8cf3ee9153dcead729a945c07614ff36abfa2474b12
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160492 image build -t localhost/my-image:functional-160492 testdata/build --alsologtostderr:
I0319 18:37:03.810640  483999 out.go:345] Setting OutFile to fd 1 ...
I0319 18:37:03.811545  483999 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:03.811561  483999 out.go:358] Setting ErrFile to fd 2...
I0319 18:37:03.811567  483999 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0319 18:37:03.811890  483999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
I0319 18:37:03.812627  483999 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:03.813589  483999 config.go:182] Loaded profile config "functional-160492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0319 18:37:03.814751  483999 cli_runner.go:164] Run: docker container inspect functional-160492 --format={{.State.Status}}
I0319 18:37:03.836196  483999 ssh_runner.go:195] Run: systemctl --version
I0319 18:37:03.836249  483999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160492
I0319 18:37:03.857128  483999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/functional-160492/id_rsa Username:docker}
I0319 18:37:03.947150  483999 build_images.go:161] Building image from path: /tmp/build.3122266310.tar
I0319 18:37:03.947231  483999 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0319 18:37:03.957603  483999 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3122266310.tar
I0319 18:37:03.961440  483999 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3122266310.tar: stat -c "%s %y" /var/lib/minikube/build/build.3122266310.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3122266310.tar': No such file or directory
I0319 18:37:03.961471  483999 ssh_runner.go:362] scp /tmp/build.3122266310.tar --> /var/lib/minikube/build/build.3122266310.tar (3072 bytes)
I0319 18:37:03.988431  483999 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3122266310
I0319 18:37:03.998940  483999 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3122266310 -xf /var/lib/minikube/build/build.3122266310.tar
I0319 18:37:04.010497  483999 crio.go:315] Building image: /var/lib/minikube/build/build.3122266310
I0319 18:37:04.010590  483999 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-160492 /var/lib/minikube/build/build.3122266310 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0319 18:37:06.886018  483999 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-160492 /var/lib/minikube/build/build.3122266310 --cgroup-manager=cgroupfs: (2.875405869s)
I0319 18:37:06.886084  483999 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3122266310
I0319 18:37:06.894818  483999 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3122266310.tar
I0319 18:37:06.903819  483999 build_images.go:217] Built localhost/my-image:functional-160492 from /tmp/build.3122266310.tar
I0319 18:37:06.903853  483999 build_images.go:133] succeeded building to: functional-160492
I0319 18:37:06.903859  483999 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)
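
Per the STEP 1/3..3/3 lines, testdata/build is a three-instruction Dockerfile context that minikube tars up, copies to the node, and builds there with podman. A sketch of the same flow with a throwaway context (paths and file contents are placeholders, not the test's testdata):

  # assemble a minimal context matching the three steps seen above
  mkdir -p /tmp/ctx && printf 'hello\n' > /tmp/ctx/content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/ctx/Dockerfile
  # build inside the cluster, then confirm the image landed
  out/minikube-linux-arm64 -p functional-160492 image build -t localhost/my-image:functional-160492 /tmp/ctx
  out/minikube-linux-arm64 -p functional-160492 image ls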

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-160492
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image load --daemon kicbase/echo-server:functional-160492 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-160492 image load --daemon kicbase/echo-server:functional-160492 --alsologtostderr: (1.121976972s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image load --daemon kicbase/echo-server:functional-160492 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-160492
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image load --daemon kicbase/echo-server:functional-160492 --alsologtostderr
2025/03/19 18:36:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image save kicbase/echo-server:functional-160492 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image rm kicbase/echo-server:functional-160492 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.99s)
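
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a save, remove, and restore round trip. Condensed into one sketch (the tar path is a placeholder):

  # export, drop, and re-import the same image
  out/minikube-linux-arm64 -p functional-160492 image save kicbase/echo-server:functional-160492 /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-160492 image rm kicbase/echo-server:functional-160492
  out/minikube-linux-arm64 -p functional-160492 image load /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-160492 image ls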

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-160492
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-160492 image save --daemon kicbase/echo-server:functional-160492 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-160492
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-160492
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-160492
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-160492
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (181.27s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-034788 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0319 18:38:28.574282  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:38:56.281904  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-034788 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m0.434163942s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (181.27s)
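
For reference, the invocation that produced this multi-control-plane cluster, followed by the status check, exactly as run above (only the profile name is arbitrary):

  # --ha provisions multiple control-plane nodes; --wait=true blocks until components are healthy
  out/minikube-linux-arm64 start -p ha-034788 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr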

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.70s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-034788 -- rollout status deployment/busybox: (5.713740734s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-4k96p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-xs5sb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-znp7t -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-4k96p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-xs5sb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-znp7t -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-4k96p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-xs5sb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-znp7t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.70s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.65s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-4k96p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-4k96p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-xs5sb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-xs5sb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-znp7t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-034788 -- exec busybox-58667487b6-znp7t -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
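The pipeline logged above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the fifth line of busybox nslookup's output, which carries the resolved address for the queried name, cuts out the IP field, and then pings it (192.168.49.1 here, the docker network gateway). A sketch of the same round trip, assuming busybox's exact output layout; pod and context names are copied from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		pod := "busybox-58667487b6-4k96p"
		script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
		raw, err := exec.Command("kubectl", "--context", "ha-034788",
			"exec", pod, "--", "sh", "-c", script).Output()
		if err != nil {
			panic(err)
		}
		ip := strings.TrimSpace(string(raw))
		out, err := exec.Command("kubectl", "--context", "ha-034788",
			"exec", pod, "--", "ping", "-c", "1", ip).CombinedOutput()
		fmt.Printf("ping %s: err=%v\n%s", ip, err, out)
	}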

TestMultiControlPlane/serial/AddWorkerNode (34.89s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-034788 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-034788 -v=7 --alsologtostderr: (33.931282388s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (34.89s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-034788 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.008156194s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

TestMultiControlPlane/serial/CopyFile (19.09s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp testdata/cp-test.txt ha-034788:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4076997027/001/cp-test_ha-034788.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788:/home/docker/cp-test.txt ha-034788-m02:/home/docker/cp-test_ha-034788_ha-034788-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m02 "sudo cat /home/docker/cp-test_ha-034788_ha-034788-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788:/home/docker/cp-test.txt ha-034788-m03:/home/docker/cp-test_ha-034788_ha-034788-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m03 "sudo cat /home/docker/cp-test_ha-034788_ha-034788-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788:/home/docker/cp-test.txt ha-034788-m04:/home/docker/cp-test_ha-034788_ha-034788-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m04 "sudo cat /home/docker/cp-test_ha-034788_ha-034788-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp testdata/cp-test.txt ha-034788-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4076997027/001/cp-test_ha-034788-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m02:/home/docker/cp-test.txt ha-034788:/home/docker/cp-test_ha-034788-m02_ha-034788.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788 "sudo cat /home/docker/cp-test_ha-034788-m02_ha-034788.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m02:/home/docker/cp-test.txt ha-034788-m03:/home/docker/cp-test_ha-034788-m02_ha-034788-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m03 "sudo cat /home/docker/cp-test_ha-034788-m02_ha-034788-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m02:/home/docker/cp-test.txt ha-034788-m04:/home/docker/cp-test_ha-034788-m02_ha-034788-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m04 "sudo cat /home/docker/cp-test_ha-034788-m02_ha-034788-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp testdata/cp-test.txt ha-034788-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4076997027/001/cp-test_ha-034788-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m03:/home/docker/cp-test.txt ha-034788:/home/docker/cp-test_ha-034788-m03_ha-034788.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788 "sudo cat /home/docker/cp-test_ha-034788-m03_ha-034788.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m03:/home/docker/cp-test.txt ha-034788-m02:/home/docker/cp-test_ha-034788-m03_ha-034788-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m02 "sudo cat /home/docker/cp-test_ha-034788-m03_ha-034788-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m03:/home/docker/cp-test.txt ha-034788-m04:/home/docker/cp-test_ha-034788-m03_ha-034788-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m04 "sudo cat /home/docker/cp-test_ha-034788-m03_ha-034788-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp testdata/cp-test.txt ha-034788-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m04 "sudo cat /home/docker/cp-test.txt"
E0319 18:41:12.870316  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4076997027/001/cp-test_ha-034788-m04.txt
E0319 18:41:12.877253  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:41:12.888562  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:41:12.909874  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:41:12.952320  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:41:13.033643  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m04 "sudo cat /home/docker/cp-test.txt"
E0319 18:41:13.195012  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m04:/home/docker/cp-test.txt ha-034788:/home/docker/cp-test_ha-034788-m04_ha-034788.txt
E0319 18:41:13.516453  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m04 "sudo cat /home/docker/cp-test.txt"
E0319 18:41:14.158406  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788 "sudo cat /home/docker/cp-test_ha-034788-m04_ha-034788.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m04:/home/docker/cp-test.txt ha-034788-m02:/home/docker/cp-test_ha-034788-m04_ha-034788-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m02 "sudo cat /home/docker/cp-test_ha-034788-m04_ha-034788-m02.txt"
E0319 18:41:15.442019  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 cp ha-034788-m04:/home/docker/cp-test.txt ha-034788-m03:/home/docker/cp-test_ha-034788-m04_ha-034788-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 ssh -n ha-034788-m03 "sudo cat /home/docker/cp-test_ha-034788-m04_ha-034788-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.09s)
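Every step above follows one pattern: "minikube cp" a file onto a node, then read it back over "minikube ssh" and compare, repeated for every source/destination node pair. One pair of that matrix as a standalone sketch (binary path, profile, and node names are the ones logged in this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary used in this report and fails loudly.
	func run(args ...string) string {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		return string(out)
	}

	func main() {
		// One node pair from the matrix above; the test repeats this for every pair.
		run("-p", "ha-034788", "cp", "testdata/cp-test.txt", "ha-034788-m02:/home/docker/cp-test.txt")
		fmt.Print(run("-p", "ha-034788", "ssh", "-n", "ha-034788-m02", "sudo cat /home/docker/cp-test.txt"))
	}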

TestMultiControlPlane/serial/StopSecondaryNode (12.76s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 node stop m02 -v=7 --alsologtostderr
E0319 18:41:18.003585  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:41:23.125191  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-034788 node stop m02 -v=7 --alsologtostderr: (12.006460758s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr: exit status 7 (756.211624ms)
-- stdout --
	ha-034788
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-034788-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-034788-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-034788-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0319 18:41:28.841322  499830 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:41:28.841542  499830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:41:28.841588  499830 out.go:358] Setting ErrFile to fd 2...
	I0319 18:41:28.841605  499830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:41:28.841965  499830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 18:41:28.842205  499830 out.go:352] Setting JSON to false
	I0319 18:41:28.842268  499830 mustload.go:65] Loading cluster: ha-034788
	I0319 18:41:28.842295  499830 notify.go:220] Checking for updates...
	I0319 18:41:28.842739  499830 config.go:182] Loaded profile config "ha-034788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:41:28.842787  499830 status.go:174] checking status of ha-034788 ...
	I0319 18:41:28.843405  499830 cli_runner.go:164] Run: docker container inspect ha-034788 --format={{.State.Status}}
	I0319 18:41:28.864798  499830 status.go:371] ha-034788 host status = "Running" (err=<nil>)
	I0319 18:41:28.864827  499830 host.go:66] Checking if "ha-034788" exists ...
	I0319 18:41:28.865244  499830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-034788
	I0319 18:41:28.896896  499830 host.go:66] Checking if "ha-034788" exists ...
	I0319 18:41:28.897278  499830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 18:41:28.897328  499830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-034788
	I0319 18:41:28.918258  499830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/ha-034788/id_rsa Username:docker}
	I0319 18:41:29.008333  499830 ssh_runner.go:195] Run: systemctl --version
	I0319 18:41:29.012672  499830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 18:41:29.025262  499830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:41:29.091380  499830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-03-19 18:41:29.0807668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:41:29.091942  499830 kubeconfig.go:125] found "ha-034788" server: "https://192.168.49.254:8443"
	I0319 18:41:29.091977  499830 api_server.go:166] Checking apiserver status ...
	I0319 18:41:29.092017  499830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 18:41:29.103745  499830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1406/cgroup
	I0319 18:41:29.113075  499830 api_server.go:182] apiserver freezer: "12:freezer:/docker/2ae951d3554ea8ed2fa5717693f6d59288a8473622689b7514cd5db071a01401/crio/crio-6759e6a53d524cc033970f37633d5b8131fbde4c2a061edbaaf1b5f9d51720d9"
	I0319 18:41:29.113142  499830 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2ae951d3554ea8ed2fa5717693f6d59288a8473622689b7514cd5db071a01401/crio/crio-6759e6a53d524cc033970f37633d5b8131fbde4c2a061edbaaf1b5f9d51720d9/freezer.state
	I0319 18:41:29.122096  499830 api_server.go:204] freezer state: "THAWED"
	I0319 18:41:29.122127  499830 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0319 18:41:29.130281  499830 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0319 18:41:29.130315  499830 status.go:463] ha-034788 apiserver status = Running (err=<nil>)
	I0319 18:41:29.130328  499830 status.go:176] ha-034788 status: &{Name:ha-034788 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 18:41:29.130384  499830 status.go:174] checking status of ha-034788-m02 ...
	I0319 18:41:29.130720  499830 cli_runner.go:164] Run: docker container inspect ha-034788-m02 --format={{.State.Status}}
	I0319 18:41:29.152858  499830 status.go:371] ha-034788-m02 host status = "Stopped" (err=<nil>)
	I0319 18:41:29.152882  499830 status.go:384] host is not running, skipping remaining checks
	I0319 18:41:29.152889  499830 status.go:176] ha-034788-m02 status: &{Name:ha-034788-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 18:41:29.152909  499830 status.go:174] checking status of ha-034788-m03 ...
	I0319 18:41:29.153230  499830 cli_runner.go:164] Run: docker container inspect ha-034788-m03 --format={{.State.Status}}
	I0319 18:41:29.181735  499830 status.go:371] ha-034788-m03 host status = "Running" (err=<nil>)
	I0319 18:41:29.181759  499830 host.go:66] Checking if "ha-034788-m03" exists ...
	I0319 18:41:29.182159  499830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-034788-m03
	I0319 18:41:29.202777  499830 host.go:66] Checking if "ha-034788-m03" exists ...
	I0319 18:41:29.203106  499830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 18:41:29.203151  499830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-034788-m03
	I0319 18:41:29.221370  499830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/ha-034788-m03/id_rsa Username:docker}
	I0319 18:41:29.314848  499830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 18:41:29.329357  499830 kubeconfig.go:125] found "ha-034788" server: "https://192.168.49.254:8443"
	I0319 18:41:29.329393  499830 api_server.go:166] Checking apiserver status ...
	I0319 18:41:29.329454  499830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 18:41:29.343016  499830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1341/cgroup
	I0319 18:41:29.352633  499830 api_server.go:182] apiserver freezer: "12:freezer:/docker/53acf8994a1d932f8ccaa1a73d7532e188974b960d65f77032e6ab20defceb60/crio/crio-add62ae80fd374405e06b48ca78dc7a5361fe6b0c3cf2122c8ba45a4746aad99"
	I0319 18:41:29.352712  499830 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/53acf8994a1d932f8ccaa1a73d7532e188974b960d65f77032e6ab20defceb60/crio/crio-add62ae80fd374405e06b48ca78dc7a5361fe6b0c3cf2122c8ba45a4746aad99/freezer.state
	I0319 18:41:29.362840  499830 api_server.go:204] freezer state: "THAWED"
	I0319 18:41:29.362922  499830 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0319 18:41:29.370592  499830 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0319 18:41:29.370621  499830 status.go:463] ha-034788-m03 apiserver status = Running (err=<nil>)
	I0319 18:41:29.370631  499830 status.go:176] ha-034788-m03 status: &{Name:ha-034788-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 18:41:29.370647  499830 status.go:174] checking status of ha-034788-m04 ...
	I0319 18:41:29.370963  499830 cli_runner.go:164] Run: docker container inspect ha-034788-m04 --format={{.State.Status}}
	I0319 18:41:29.388102  499830 status.go:371] ha-034788-m04 host status = "Running" (err=<nil>)
	I0319 18:41:29.388123  499830 host.go:66] Checking if "ha-034788-m04" exists ...
	I0319 18:41:29.388605  499830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-034788-m04
	I0319 18:41:29.407440  499830 host.go:66] Checking if "ha-034788-m04" exists ...
	I0319 18:41:29.407738  499830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 18:41:29.407783  499830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-034788-m04
	I0319 18:41:29.426237  499830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/ha-034788-m04/id_rsa Username:docker}
	I0319 18:41:29.514890  499830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 18:41:29.530675  499830 status.go:176] ha-034788-m04 status: &{Name:ha-034788-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.76s)
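The status checker in the stderr block above decides whether the apiserver is live by finding the kube-apiserver PID with pgrep, reading its freezer cgroup from /proc/<pid>/cgroup, and confirming freezer.state is THAWED before hitting /healthz. The same probe replayed over "minikube ssh" (a sketch of what the logs show, not minikube's own code; binary path, profile, and node names are from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ssh runs a command on the primary node over "minikube ssh".
	func ssh(cmd string) string {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-034788",
			"ssh", "-n", "ha-034788", cmd).Output()
		if err != nil {
			panic(err)
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		pid := ssh("sudo pgrep -xnf kube-apiserver.*minikube.*")
		// A cgroup v1 line looks like "12:freezer:/docker/<id>/crio/crio-<id>".
		line := ssh("sudo egrep ^[0-9]+:freezer: /proc/" + pid + "/cgroup")
		path := strings.SplitN(line, ":", 3)[2]
		state := ssh("sudo cat /sys/fs/cgroup/freezer" + path + "/freezer.state")
		fmt.Println("apiserver freezer state:", state) // "THAWED" = running, not paused
	}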

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.26s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 node start m02 -v=7 --alsologtostderr
E0319 18:41:33.366578  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-034788 node start m02 -v=7 --alsologtostderr: (21.485439242s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr: (1.599598092s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0319 18:41:53.848493  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.419109653s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.96s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-034788 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-034788 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-034788 -v=7 --alsologtostderr: (37.267223281s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-034788 --wait=true -v=7 --alsologtostderr
E0319 18:42:34.810419  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:43:28.574366  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:43:56.732737  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-034788 --wait=true -v=7 --alsologtostderr: (2m47.494178744s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-034788
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.96s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.5s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-034788 node delete m03 -v=7 --alsologtostderr: (11.55020333s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.50s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (35.86s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-034788 stop -v=7 --alsologtostderr: (35.744397213s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr: exit status 7 (117.464707ms)
-- stdout --
	ha-034788
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-034788-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-034788-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0319 18:46:08.992943  514437 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:46:08.993076  514437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:46:08.993086  514437 out.go:358] Setting ErrFile to fd 2...
	I0319 18:46:08.993091  514437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:46:08.993340  514437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 18:46:08.993551  514437 out.go:352] Setting JSON to false
	I0319 18:46:08.993596  514437 mustload.go:65] Loading cluster: ha-034788
	I0319 18:46:08.993643  514437 notify.go:220] Checking for updates...
	I0319 18:46:08.994061  514437 config.go:182] Loaded profile config "ha-034788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:46:08.994087  514437 status.go:174] checking status of ha-034788 ...
	I0319 18:46:08.994637  514437 cli_runner.go:164] Run: docker container inspect ha-034788 --format={{.State.Status}}
	I0319 18:46:09.016630  514437 status.go:371] ha-034788 host status = "Stopped" (err=<nil>)
	I0319 18:46:09.016656  514437 status.go:384] host is not running, skipping remaining checks
	I0319 18:46:09.016663  514437 status.go:176] ha-034788 status: &{Name:ha-034788 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 18:46:09.016695  514437 status.go:174] checking status of ha-034788-m02 ...
	I0319 18:46:09.017023  514437 cli_runner.go:164] Run: docker container inspect ha-034788-m02 --format={{.State.Status}}
	I0319 18:46:09.037516  514437 status.go:371] ha-034788-m02 host status = "Stopped" (err=<nil>)
	I0319 18:46:09.037542  514437 status.go:384] host is not running, skipping remaining checks
	I0319 18:46:09.037548  514437 status.go:176] ha-034788-m02 status: &{Name:ha-034788-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 18:46:09.037570  514437 status.go:174] checking status of ha-034788-m04 ...
	I0319 18:46:09.037908  514437 cli_runner.go:164] Run: docker container inspect ha-034788-m04 --format={{.State.Status}}
	I0319 18:46:09.059536  514437 status.go:371] ha-034788-m04 host status = "Stopped" (err=<nil>)
	I0319 18:46:09.059557  514437 status.go:384] host is not running, skipping remaining checks
	I0319 18:46:09.059565  514437 status.go:176] ha-034788-m04 status: &{Name:ha-034788-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.86s)

TestMultiControlPlane/serial/RestartCluster (96.22s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-034788 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0319 18:46:12.869923  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 18:46:40.574784  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-034788 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m35.201619877s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (96.22s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (72.76s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-034788 --control-plane -v=7 --alsologtostderr
E0319 18:48:28.574009  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-034788 --control-plane -v=7 --alsologtostderr: (1m11.755791436s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-034788 status -v=7 --alsologtostderr: (1.000634864s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.76s)
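With three control planes restored, the shared apiserver endpoint that the earlier status logs probe (https://192.168.49.254:8443/healthz) can be hit directly. A sketch of that check; the address is taken from this run's logs, and TLS verification is skipped only because the cluster CA is not in the system trust store:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver answers 200 "ok"
	}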

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

TestJSONOutput/start/Command (48.64s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-901909 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0319 18:49:51.645675  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-901909 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (48.632614849s)
--- PASS: TestJSONOutput/start/Command (48.64s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-901909 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-901909 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-901909 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-901909 --output=json --user=testUser: (5.836992688s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-431486 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-431486 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (103.113728ms)
-- stdout --
	{"specversion":"1.0","id":"4b5ba5cd-7599-4e9b-80f9-fc6620346af5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-431486] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"744ca95a-33f1-41d7-ae48-d17c7884b353","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20544"}}
	{"specversion":"1.0","id":"3f68c0c1-6b3f-4268-a916-4502510bd907","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a0063314-1d7f-4c95-9ccd-6579b08d5495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig"}}
	{"specversion":"1.0","id":"108f9f09-97e8-43ef-a6d4-bbbfaf8ec057","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube"}}
	{"specversion":"1.0","id":"feac8618-89ad-4d7e-b9c9-1fb351fd6261","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"28b2733e-d678-417b-811c-1ca46c606464","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"740d2211-f616-4903-b9bc-3b81123ddb6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-431486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-431486
--- PASS: TestErrorJSONOutput (0.25s)
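The --output=json lines in the stdout block above are one CloudEvents envelope per line (specversion, id, source, type, and a data payload of strings). A small decoder for that stream; the struct fields are taken from the events shown above, and the program is a sketch rather than minikube's own schema definition:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent matches the envelope of the JSON lines in the stdout block above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Pipe "minikube start --output=json ..." into this program.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore any non-JSON lines
			}
			fmt.Println(ev.Type, ev.Data["message"])
		}
	}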

TestKicCustomNetwork/create_custom_network (39.5s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-408073 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-408073 --network=: (37.395462995s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-408073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-408073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-408073: (2.074125868s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.50s)

TestKicCustomNetwork/use_default_bridge_network (32.71s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-475088 --network=bridge
E0319 18:51:12.871155  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-475088 --network=bridge: (30.639130766s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-475088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-475088
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-475088: (2.050021776s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.71s)

TestKicExistingNetwork (32.96s)
=== RUN   TestKicExistingNetwork
I0319 18:51:21.678577  453411 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0319 18:51:21.697569  453411 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0319 18:51:21.698322  453411 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0319 18:51:21.699047  453411 cli_runner.go:164] Run: docker network inspect existing-network
W0319 18:51:21.717017  453411 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0319 18:51:21.717046  453411 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0319 18:51:21.717062  453411 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0319 18:51:21.717246  453411 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0319 18:51:21.733868  453411 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91657e88bd0e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9e:fc:67:1b:9b:c7} reservation:<nil>}
I0319 18:51:21.737631  453411 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0319 18:51:21.738098  453411 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001767c70}
I0319 18:51:21.738590  453411 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0319 18:51:21.738654  453411 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0319 18:51:21.795812  453411 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-832967 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-832967 --network=existing-network: (31.171669355s)
helpers_test.go:175: Cleaning up "existing-network-832967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-832967
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-832967: (1.637673675s)
I0319 18:51:54.622228  453411 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.96s)
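
Note: this test exercises reusing a Docker network that exists before minikube starts. A minimal sketch of the same scenario by hand (network and profile names here are illustrative, not the test's generated ones):

    docker network create --driver=bridge --subnet=192.168.67.0/24 my-existing-net
    minikube start -p demo --network=my-existing-net --driver=docker --container-runtime=crio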

TestKicCustomSubnet (31.38s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-865671 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-865671 --subnet=192.168.60.0/24: (29.277399373s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-865671 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-865671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-865671
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-865671: (2.082755025s)
--- PASS: TestKicCustomSubnet (31.38s)
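
Note: in this variant minikube creates the network itself on a requested subnet; the network is named after the profile. A sketch with an illustrative profile name:

    minikube start -p demo --subnet=192.168.60.0/24 --driver=docker --container-runtime=crio
    docker network inspect demo --format "{{(index .IPAM.Config 0).Subnet}}"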

TestKicStaticIP (34.42s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-481335 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-481335 --static-ip=192.168.200.200: (32.000002549s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-481335 ip
helpers_test.go:175: Cleaning up "static-ip-481335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-481335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-481335: (2.25421315s)
--- PASS: TestKicStaticIP (34.42s)
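
Note: a static node IP can be pinned the same way (profile name and address illustrative; the address must be a private IPv4):

    minikube start -p demo --static-ip=192.168.200.200 --driver=docker --container-runtime=crio
    minikube -p demo ip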

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (69.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-979611 --driver=docker  --container-runtime=crio
E0319 18:53:28.574009  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-979611 --driver=docker  --container-runtime=crio: (31.580787547s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-982549 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-982549 --driver=docker  --container-runtime=crio: (31.872492984s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-979611
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-982549
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-982549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-982549
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-982549: (1.980424751s)
helpers_test.go:175: Cleaning up "first-979611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-979611
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-979611: (2.322880415s)
--- PASS: TestMinikubeProfile (69.08s)
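
Note: the profile-switching flow above corresponds to commands like these (profile names illustrative):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first               # make "first" the active profile
    minikube profile list --output json  # verify which profile is active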

TestMountStart/serial/StartWithMountFirst (6.45s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-769630 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-769630 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.450965986s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.45s)
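
Note: the mount flags under test map onto a start invocation like this sketch (profile name and port illustrative; --no-kubernetes skips cluster bring-up so only the host mount is exercised):

    minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
      --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mount-demo ssh -- ls /minikube-host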

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-769630 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (9.20s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-771279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-771279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.197955848s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.20s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-771279 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-769630 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-769630 --alsologtostderr -v=5: (1.622379743s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-771279 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-771279
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-771279: (1.219088835s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-771279
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-771279: (6.820628138s)
--- PASS: TestMountStart/serial/RestartStopped (7.82s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-771279 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (82.17s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910920 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910920 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.612514304s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.17s)
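
Note: a two-node cluster like the one above can be created and inspected with (profile name illustrative):

    minikube start -p multi-demo --nodes=2 --memory=2200 --driver=docker --container-runtime=crio
    minikube -p multi-demo status --alsologtostderr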

TestMultiNode/serial/DeployApp2Nodes (7.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-910920 -- rollout status deployment/busybox: (5.549280146s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-dd6tj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-v97wp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-dd6tj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-v97wp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-dd6tj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-v97wp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.45s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-dd6tj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-dd6tj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-v97wp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-910920 -- exec busybox-58667487b6-v97wp -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (30.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-910920 -v 3 --alsologtostderr
E0319 18:56:12.869960  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-910920 -v 3 --alsologtostderr: (29.741883447s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.38s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-910920 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (10.13s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp testdata/cp-test.txt multinode-910920:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3353182507/001/cp-test_multinode-910920.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920:/home/docker/cp-test.txt multinode-910920-m02:/home/docker/cp-test_multinode-910920_multinode-910920-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m02 "sudo cat /home/docker/cp-test_multinode-910920_multinode-910920-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920:/home/docker/cp-test.txt multinode-910920-m03:/home/docker/cp-test_multinode-910920_multinode-910920-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m03 "sudo cat /home/docker/cp-test_multinode-910920_multinode-910920-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp testdata/cp-test.txt multinode-910920-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3353182507/001/cp-test_multinode-910920-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920-m02:/home/docker/cp-test.txt multinode-910920:/home/docker/cp-test_multinode-910920-m02_multinode-910920.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920 "sudo cat /home/docker/cp-test_multinode-910920-m02_multinode-910920.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920-m02:/home/docker/cp-test.txt multinode-910920-m03:/home/docker/cp-test_multinode-910920-m02_multinode-910920-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m03 "sudo cat /home/docker/cp-test_multinode-910920-m02_multinode-910920-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp testdata/cp-test.txt multinode-910920-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3353182507/001/cp-test_multinode-910920-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920-m03:/home/docker/cp-test.txt multinode-910920:/home/docker/cp-test_multinode-910920-m03_multinode-910920.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920 "sudo cat /home/docker/cp-test_multinode-910920-m03_multinode-910920.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 cp multinode-910920-m03:/home/docker/cp-test.txt multinode-910920-m02:/home/docker/cp-test_multinode-910920-m03_multinode-910920-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 ssh -n multinode-910920-m02 "sudo cat /home/docker/cp-test_multinode-910920-m03_multinode-910920-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.13s)
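
Note: the copy matrix above relies on node-qualified paths of the form <node>:<path>; a minimal sketch (profile and node names illustrative):

    minikube -p multi-demo cp ./local.txt multi-demo-m02:/home/docker/remote.txt
    minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/remote.txt"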

TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-910920 node stop m03: (1.203691915s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910920 status: exit status 7 (501.962276ms)

-- stdout --
	multinode-910920
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910920-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910920-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910920 status --alsologtostderr: exit status 7 (513.407178ms)

-- stdout --
	multinode-910920
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910920-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910920-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0319 18:56:52.421349  568304 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:56:52.421491  568304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:56:52.421503  568304 out.go:358] Setting ErrFile to fd 2...
	I0319 18:56:52.421509  568304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:56:52.421943  568304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 18:56:52.422328  568304 out.go:352] Setting JSON to false
	I0319 18:56:52.422362  568304 mustload.go:65] Loading cluster: multinode-910920
	I0319 18:56:52.422810  568304 config.go:182] Loaded profile config "multinode-910920": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:56:52.422834  568304 status.go:174] checking status of multinode-910920 ...
	I0319 18:56:52.423316  568304 cli_runner.go:164] Run: docker container inspect multinode-910920 --format={{.State.Status}}
	I0319 18:56:52.423853  568304 notify.go:220] Checking for updates...
	I0319 18:56:52.444266  568304 status.go:371] multinode-910920 host status = "Running" (err=<nil>)
	I0319 18:56:52.444297  568304 host.go:66] Checking if "multinode-910920" exists ...
	I0319 18:56:52.444614  568304 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-910920
	I0319 18:56:52.471593  568304 host.go:66] Checking if "multinode-910920" exists ...
	I0319 18:56:52.471901  568304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 18:56:52.471958  568304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-910920
	I0319 18:56:52.494279  568304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33298 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/multinode-910920/id_rsa Username:docker}
	I0319 18:56:52.584195  568304 ssh_runner.go:195] Run: systemctl --version
	I0319 18:56:52.589328  568304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 18:56:52.600936  568304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 18:56:52.662491  568304 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-03-19 18:56:52.652374547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 18:56:52.663178  568304 kubeconfig.go:125] found "multinode-910920" server: "https://192.168.58.2:8443"
	I0319 18:56:52.663214  568304 api_server.go:166] Checking apiserver status ...
	I0319 18:56:52.663272  568304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 18:56:52.673915  568304 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	I0319 18:56:52.683061  568304 api_server.go:182] apiserver freezer: "12:freezer:/docker/cdfdbabf9f3504b62a196fa639d650169a1b9c71aad1c34b4e8de3d20fa50232/crio/crio-e894793fde864e28be757a393318c016efcf65738a8c77662888012b9e4b9b59"
	I0319 18:56:52.683131  568304 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cdfdbabf9f3504b62a196fa639d650169a1b9c71aad1c34b4e8de3d20fa50232/crio/crio-e894793fde864e28be757a393318c016efcf65738a8c77662888012b9e4b9b59/freezer.state
	I0319 18:56:52.691771  568304 api_server.go:204] freezer state: "THAWED"
	I0319 18:56:52.691800  568304 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0319 18:56:52.699762  568304 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0319 18:56:52.699788  568304 status.go:463] multinode-910920 apiserver status = Running (err=<nil>)
	I0319 18:56:52.699799  568304 status.go:176] multinode-910920 status: &{Name:multinode-910920 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 18:56:52.699815  568304 status.go:174] checking status of multinode-910920-m02 ...
	I0319 18:56:52.700130  568304 cli_runner.go:164] Run: docker container inspect multinode-910920-m02 --format={{.State.Status}}
	I0319 18:56:52.718568  568304 status.go:371] multinode-910920-m02 host status = "Running" (err=<nil>)
	I0319 18:56:52.718599  568304 host.go:66] Checking if "multinode-910920-m02" exists ...
	I0319 18:56:52.718913  568304 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-910920-m02
	I0319 18:56:52.736743  568304 host.go:66] Checking if "multinode-910920-m02" exists ...
	I0319 18:56:52.737063  568304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 18:56:52.737115  568304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-910920-m02
	I0319 18:56:52.754479  568304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33303 SSHKeyPath:/home/jenkins/minikube-integration/20544-448023/.minikube/machines/multinode-910920-m02/id_rsa Username:docker}
	I0319 18:56:52.842990  568304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 18:56:52.854494  568304 status.go:176] multinode-910920-m02 status: &{Name:multinode-910920-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0319 18:56:52.854529  568304 status.go:174] checking status of multinode-910920-m03 ...
	I0319 18:56:52.854876  568304 cli_runner.go:164] Run: docker container inspect multinode-910920-m03 --format={{.State.Status}}
	I0319 18:56:52.871689  568304 status.go:371] multinode-910920-m03 host status = "Stopped" (err=<nil>)
	I0319 18:56:52.871713  568304 status.go:384] host is not running, skipping remaining checks
	I0319 18:56:52.871720  568304 status.go:176] multinode-910920-m03 status: &{Name:multinode-910920-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)

TestMultiNode/serial/StartAfterStop (10.33s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-910920 node start m03 -v=7 --alsologtostderr: (9.5446196s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.33s)
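
Note: stopping and restarting an individual node, as tested above, goes through the node subcommand (profile and node names illustrative):

    minikube -p multi-demo node stop m03
    minikube -p multi-demo node start m03
    minikube -p multi-demo status   # exits with status 7 while any node is stopped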

TestMultiNode/serial/RestartKeepsNodes (82.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910920
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-910920
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-910920: (24.770465359s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910920 --wait=true -v=8 --alsologtostderr
E0319 18:57:35.936095  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910920 --wait=true -v=8 --alsologtostderr: (57.424551571s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910920
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.32s)

TestMultiNode/serial/DeleteNode (5.30s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 node delete m03
E0319 18:58:28.573808  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-910920 node delete m03: (4.626450896s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-910920 stop: (23.667651742s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910920 status: exit status 7 (94.784458ms)

-- stdout --
	multinode-910920
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910920-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-910920 status --alsologtostderr: exit status 7 (91.874366ms)

-- stdout --
	multinode-910920
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910920-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0319 18:58:54.644262  575885 out.go:345] Setting OutFile to fd 1 ...
	I0319 18:58:54.644444  575885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:58:54.644474  575885 out.go:358] Setting ErrFile to fd 2...
	I0319 18:58:54.644494  575885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 18:58:54.644752  575885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 18:58:54.644976  575885 out.go:352] Setting JSON to false
	I0319 18:58:54.645036  575885 mustload.go:65] Loading cluster: multinode-910920
	I0319 18:58:54.645490  575885 config.go:182] Loaded profile config "multinode-910920": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 18:58:54.645555  575885 status.go:174] checking status of multinode-910920 ...
	I0319 18:58:54.645074  575885 notify.go:220] Checking for updates...
	I0319 18:58:54.646728  575885 cli_runner.go:164] Run: docker container inspect multinode-910920 --format={{.State.Status}}
	I0319 18:58:54.665019  575885 status.go:371] multinode-910920 host status = "Stopped" (err=<nil>)
	I0319 18:58:54.665040  575885 status.go:384] host is not running, skipping remaining checks
	I0319 18:58:54.665046  575885 status.go:176] multinode-910920 status: &{Name:multinode-910920 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 18:58:54.665078  575885 status.go:174] checking status of multinode-910920-m02 ...
	I0319 18:58:54.665379  575885 cli_runner.go:164] Run: docker container inspect multinode-910920-m02 --format={{.State.Status}}
	I0319 18:58:54.687450  575885 status.go:371] multinode-910920-m02 host status = "Stopped" (err=<nil>)
	I0319 18:58:54.687474  575885 status.go:384] host is not running, skipping remaining checks
	I0319 18:58:54.687482  575885 status.go:176] multinode-910920-m02 status: &{Name:multinode-910920-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

TestMultiNode/serial/RestartMultiNode (57.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910920 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910920 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (56.591912522s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-910920 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.25s)

TestMultiNode/serial/ValidateNameConflict (33.55s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-910920
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910920-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-910920-m02 --driver=docker  --container-runtime=crio: exit status 14 (110.823159ms)

-- stdout --
	* [multinode-910920-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20544
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-910920-m02' is duplicated with machine name 'multinode-910920-m02' in profile 'multinode-910920'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-910920-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-910920-m03 --driver=docker  --container-runtime=crio: (31.057791325s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-910920
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-910920: exit status 80 (332.814326ms)

-- stdout --
	* Adding node m03 to cluster multinode-910920 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-910920-m03 already exists in multinode-910920-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-910920-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-910920-m03: (1.997321481s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.55s)

TestPreload (129.55s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-617896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0319 19:01:12.873644  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-617896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.68924885s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-617896 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-617896 image pull gcr.io/k8s-minikube/busybox: (3.272271823s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-617896
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-617896: (5.849124733s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-617896 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-617896 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.050008558s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-617896 image list
helpers_test.go:175: Cleaning up "test-preload-617896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-617896
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-617896: (2.405352824s)
--- PASS: TestPreload (129.55s)
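
Note: the preload scenario pins an older Kubernetes version with the preloaded image tarball disabled, then stops and restarts the cluster to verify a pulled image persists. A sketch (profile name illustrative):

    minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4 \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo && minikube start -p preload-demo
    minikube -p preload-demo image list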

TestScheduledStopUnix (110.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-769151 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-769151 --memory=2048 --driver=docker  --container-runtime=crio: (34.485608946s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-769151 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-769151 -n scheduled-stop-769151
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-769151 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0319 19:03:14.256897  453411 retry.go:31] will retry after 74.491µs: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.257356  453411 retry.go:31] will retry after 115.278µs: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.259778  453411 retry.go:31] will retry after 265.548µs: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.261062  453411 retry.go:31] will retry after 461.845µs: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.262156  453411 retry.go:31] will retry after 530.356µs: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.270070  453411 retry.go:31] will retry after 505.976µs: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.272434  453411 retry.go:31] will retry after 842.514µs: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.273580  453411 retry.go:31] will retry after 947.854µs: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.274717  453411 retry.go:31] will retry after 1.863514ms: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.276925  453411 retry.go:31] will retry after 4.318508ms: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.282131  453411 retry.go:31] will retry after 3.060388ms: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.285279  453411 retry.go:31] will retry after 12.387373ms: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.298504  453411 retry.go:31] will retry after 7.524319ms: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.306743  453411 retry.go:31] will retry after 23.32669ms: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
I0319 19:03:14.330993  453411 retry.go:31] will retry after 42.210684ms: open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/scheduled-stop-769151/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-769151 --cancel-scheduled
E0319 19:03:28.574740  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-769151 -n scheduled-stop-769151
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-769151
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-769151 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-769151
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-769151: exit status 7 (74.181095ms)

-- stdout --
	scheduled-stop-769151
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-769151 -n scheduled-stop-769151
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-769151 -n scheduled-stop-769151: exit status 7 (75.32674ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-769151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-769151
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-769151: (4.89390167s)
--- PASS: TestScheduledStopUnix (110.94s)
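
Note: scheduled stops, as exercised above, are driven by these commands (profile name illustrative):

    minikube stop -p demo --schedule 5m        # arm a stop 5 minutes from now
    minikube stop -p demo --cancel-scheduled   # disarm a pending stop
    minikube status -p demo                    # exit status 7 once the stop has fired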

TestInsufficientStorage (13.19s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-026764 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-026764 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.685604067s)

-- stdout --
	{"specversion":"1.0","id":"31308fe0-3260-4f84-95a1-ee2206dd26f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-026764] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9c68158-bf2f-4f6f-9404-77f4175e4f2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20544"}}
	{"specversion":"1.0","id":"14d2b7a1-4c8c-43a4-be17-fea3ca9a8fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"67054181-0778-4371-aeb2-d9c02d516b48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig"}}
	{"specversion":"1.0","id":"609d267c-bacf-48b8-8be6-eff22fac7959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube"}}
	{"specversion":"1.0","id":"7cab2fc9-7d2f-46eb-8d99-2c9baa4778ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f87c1aa1-6b60-4e25-9d50-40bec295daae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"88abe796-1480-4f23-93bc-f091c39c762b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"72c362fe-7ede-446e-bb8c-c0eaf06a634d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3dbe19c8-517a-4347-b2eb-39f526659cf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"416849b7-8a76-4898-b809-ae46530142ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c8a06fa8-ee94-4273-a0b7-420c846dcb29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-026764\" primary control-plane node in \"insufficient-storage-026764\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e8b4d87-4d63-4afd-b6d1-d0d9fc871869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1741860993-20523 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bc59818-18e5-4579-abfd-a224848f1fef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5da93e5d-da2a-4359-b52a-89231ca0890c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-026764 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-026764 --output=json --layout=cluster: exit status 7 (280.020148ms)
-- stdout --
	{"Name":"insufficient-storage-026764","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-026764","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0319 19:04:41.157054  593690 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-026764" does not appear in /home/jenkins/minikube-integration/20544-448023/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-026764 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-026764 --output=json --layout=cluster: exit status 7 (297.005627ms)
-- stdout --
	{"Name":"insufficient-storage-026764","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-026764","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0319 19:04:41.451572  593752 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-026764" does not appear in /home/jenkins/minikube-integration/20544-448023/kubeconfig
	E0319 19:04:41.461771  593752 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/insufficient-storage-026764/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-026764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-026764
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-026764: (1.929310971s)
--- PASS: TestInsufficientStorage (13.19s)
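
The stdout capture above shows the machine-readable form of `minikube start`: with `--output=json`, every progress step and error is emitted as one CloudEvents-style JSON object per line. For reference, a minimal Go sketch that consumes such a stream and surfaces error events; the struct covers only the fields visible in the log above, not minikube's full event schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the captured log lines; the real
// minikube event schema may carry more.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. piped from: minikube start --output=json
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error (exit %s): %s\n", e.Data["exitcode"], e.Data["message"])
		}
	}
}

For the run above, this would print the RSRC_DOCKER_STORAGE message with exit code 26.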

TestRunningBinaryUpgrade (70.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3894832787 start -p running-upgrade-316965 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3894832787 start -p running-upgrade-316965 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.436998978s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-316965 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-316965 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.640494367s)
helpers_test.go:175: Cleaning up "running-upgrade-316965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-316965
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-316965: (3.065099999s)
--- PASS: TestRunningBinaryUpgrade (70.84s)

TestKubernetesUpgrade (240.67s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-414206 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-414206 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.764376879s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-414206
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-414206: (1.285359944s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-414206 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-414206 status --format={{.Host}}: exit status 7 (98.7407ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-414206 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-414206 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m10.52420234s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-414206 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-414206 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-414206 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (146.930939ms)
-- stdout --
	* [kubernetes-upgrade-414206] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20544
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-414206
	    minikube start -p kubernetes-upgrade-414206 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4142062 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-414206 --kubernetes-version=v1.32.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-414206 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-414206 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.184978066s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-414206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-414206
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-414206: (2.505091944s)
--- PASS: TestKubernetesUpgrade (240.67s)
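
The `kubectl version --output=json` step above is how the test can confirm the server really reached v1.32.2 before the downgrade attempt. A sketch of that kind of check; `serverVersion.gitVersion` is part of kubectl's JSON output, but the comparison logic here is illustrative rather than the test's own:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// versionOutput decodes just the server version from `kubectl version --output=json`.
type versionOutput struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-414206",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var v versionOutput
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	if v.ServerVersion.GitVersion != "v1.32.2" {
		fmt.Println("unexpected server version:", v.ServerVersion.GitVersion)
	}
}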

TestMissingContainerUpgrade (165.16s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1683893777 start -p missing-upgrade-650445 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1683893777 start -p missing-upgrade-650445 --memory=2200 --driver=docker  --container-runtime=crio: (1m28.136156261s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-650445
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-650445: (11.378851797s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-650445
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-650445 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0319 19:06:31.647102  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-650445 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.777243547s)
helpers_test.go:175: Cleaning up "missing-upgrade-650445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-650445
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-650445: (2.115210544s)
--- PASS: TestMissingContainerUpgrade (165.16s)

TestPause/serial/Start (55.9s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-844830 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-844830 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.887765047s)
--- PASS: TestPause/serial/Start (55.90s)

TestPause/serial/SecondStartNoReconfiguration (26.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-844830 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-844830 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.658422469s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.68s)

TestPause/serial/Pause (0.88s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-844830 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-844830 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-844830 --output=json --layout=cluster: exit status 2 (370.260105ms)
-- stdout --
	{"Name":"pause-844830","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-844830","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
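
With `--layout=cluster`, `minikube status` reports HTTP-style status codes (200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage in the captures in this report), and the binary exits non-zero for any degraded state. A minimal Go decoder for the fields visible in these logs; a sketch based on the captured JSON, not minikube's own status types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// Exit status 2 is expected for a paused cluster, so ignore the exit
	// error and rely on the JSON payload that is still written to stdout.
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"-p", "pause-844830", "--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}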

TestPause/serial/Unpause (0.86s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-844830 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

TestPause/serial/PauseAgain (1.26s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-844830 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-844830 --alsologtostderr -v=5: (1.256699596s)
--- PASS: TestPause/serial/PauseAgain (1.26s)

TestPause/serial/DeletePaused (3.22s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-844830 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-844830 --alsologtostderr -v=5: (3.217175132s)
--- PASS: TestPause/serial/DeletePaused (3.22s)

TestPause/serial/VerifyDeletedResources (3.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0319 19:06:12.875593  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (3.386044291s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-844830
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-844830: exit status 1 (17.178891ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-844830: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.45s)
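
The VerifyDeletedResources step above treats a failing `docker volume inspect` (exit status 1 with "no such volume" on stderr) as evidence the volume was cleaned up. A hedged Go sketch of the same assertion, not the test's own helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone reports whether `docker volume inspect` fails with a
// "no such volume" error, mirroring the check in the log above.
func volumeGone(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	return err != nil && strings.Contains(strings.ToLower(string(out)), "no such volume")
}

func main() {
	fmt.Println("pause-844830 deleted:", volumeGone("pause-844830"))
}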

TestStoppedBinaryUpgrade/Setup (0.84s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

TestStoppedBinaryUpgrade/Upgrade (75.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.648918203 start -p stopped-upgrade-928854 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.648918203 start -p stopped-upgrade-928854 --memory=2200 --vm-driver=docker  --container-runtime=crio: (42.394359567s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.648918203 -p stopped-upgrade-928854 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.648918203 -p stopped-upgrade-928854 stop: (2.628418753s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-928854 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0319 19:08:28.574347  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-928854 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.032444224s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (75.06s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-928854
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-928854: (1.225257835s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611619 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-611619 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (109.221816ms)
-- stdout --
	* [NoKubernetes-611619] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20544
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (42.82s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611619 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-611619 --driver=docker  --container-runtime=crio: (42.247091269s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-611619 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.82s)

TestNetworkPlugins/group/false (5.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-766848 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-766848 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (304.775917ms)
-- stdout --
	* [false-766848] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20544
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0319 19:10:49.068007  624917 out.go:345] Setting OutFile to fd 1 ...
	I0319 19:10:49.068218  624917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 19:10:49.068246  624917 out.go:358] Setting ErrFile to fd 2...
	I0319 19:10:49.068265  624917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0319 19:10:49.068536  624917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20544-448023/.minikube/bin
	I0319 19:10:49.068983  624917 out.go:352] Setting JSON to false
	I0319 19:10:49.070055  624917 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10383,"bootTime":1742401066,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0319 19:10:49.070152  624917 start.go:139] virtualization:  
	I0319 19:10:49.075602  624917 out.go:177] * [false-766848] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0319 19:10:49.078535  624917 out.go:177]   - MINIKUBE_LOCATION=20544
	I0319 19:10:49.078738  624917 notify.go:220] Checking for updates...
	I0319 19:10:49.084637  624917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 19:10:49.087419  624917 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20544-448023/kubeconfig
	I0319 19:10:49.090219  624917 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20544-448023/.minikube
	I0319 19:10:49.093071  624917 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0319 19:10:49.095898  624917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 19:10:49.099180  624917 config.go:182] Loaded profile config "NoKubernetes-611619": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0319 19:10:49.099283  624917 driver.go:394] Setting default libvirt URI to qemu:///system
	I0319 19:10:49.147650  624917 docker.go:123] docker version: linux-28.0.2:Docker Engine - Community
	I0319 19:10:49.147784  624917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 19:10:49.269595  624917 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-19 19:10:49.26063646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
	I0319 19:10:49.269694  624917 docker.go:318] overlay module found
	I0319 19:10:49.272733  624917 out.go:177] * Using the docker driver based on user configuration
	I0319 19:10:49.275501  624917 start.go:297] selected driver: docker
	I0319 19:10:49.275515  624917 start.go:901] validating driver "docker" against <nil>
	I0319 19:10:49.275528  624917 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 19:10:49.278912  624917 out.go:201] 
	W0319 19:10:49.281728  624917 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0319 19:10:49.284517  624917 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-766848 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-766848

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-766848

>>> host: /etc/nsswitch.conf:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /etc/hosts:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /etc/resolv.conf:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-766848

>>> host: crictl pods:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: crictl containers:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> k8s: describe netcat deployment:
error: context "false-766848" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-766848" does not exist

>>> k8s: netcat logs:
error: context "false-766848" does not exist

>>> k8s: describe coredns deployment:
error: context "false-766848" does not exist

>>> k8s: describe coredns pods:
error: context "false-766848" does not exist

>>> k8s: coredns logs:
error: context "false-766848" does not exist

>>> k8s: describe api server pod(s):
error: context "false-766848" does not exist

>>> k8s: api server logs:
error: context "false-766848" does not exist

>>> host: /etc/cni:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: ip a s:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: ip r s:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: iptables-save:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: iptables table nat:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> k8s: describe kube-proxy daemon set:
error: context "false-766848" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-766848" does not exist

>>> k8s: kube-proxy logs:
error: context "false-766848" does not exist

>>> host: kubelet daemon status:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: kubelet daemon config:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> k8s: kubelet logs:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-766848

>>> host: docker daemon status:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: docker daemon config:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /etc/docker/daemon.json:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: docker system info:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: cri-docker daemon status:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: cri-docker daemon config:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: cri-dockerd version:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: containerd daemon status:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: containerd daemon config:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /etc/containerd/config.toml:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: containerd config dump:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: crio daemon status:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: crio daemon config:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: /etc/crio:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"

>>> host: crio config:
* Profile "false-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-766848"
----------------------- debugLogs end: false-766848 [took: 4.981013922s] --------------------------------
helpers_test.go:175: Cleaning up "false-766848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-766848
--- PASS: TestNetworkPlugins/group/false (5.48s)
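
The exit status 14 above comes from a start-time validation: CRI-O needs a CNI plugin to wire up pod networking, so `--cni=false` is rejected before any container is created (and debugLogs still reports `[pass: true]`, because the test expects this refusal). A simplified sketch of that kind of guard; illustrative only, not minikube's actual validation code:

package main

import "fmt"

// validateCNI mirrors the MK_USAGE failure seen above: container
// runtimes other than Docker need a CNI plugin to wire up pods.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}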

TestNoKubernetes/serial/StartWithStopK8s (22.53s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611619 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-611619 --no-kubernetes --driver=docker  --container-runtime=crio: (19.929076707s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-611619 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-611619 status -o json: exit status 2 (391.490659ms)
-- stdout --
	{"Name":"NoKubernetes-611619","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-611619
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-611619: (2.207267501s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.53s)
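
The `status -o json` output above uses a flat profile shape, distinct from the `--layout=cluster` form shown earlier in this report. A small Go decoder covering exactly the fields visible in this capture (a sketch; the field set is assumed from the log, not from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus matches the flat JSON printed by `minikube status -o json`.
type profileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	// Exit status 2 just signals a degraded state; the JSON is still printed.
	out, _ := exec.Command("out/minikube-linux-arm64",
		"-p", "NoKubernetes-611619", "status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	// With --no-kubernetes the host runs while kubelet/apiserver stay stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}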

TestNoKubernetes/serial/Start (7.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611619 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-611619 --no-kubernetes --driver=docker  --container-runtime=crio: (7.286998744s)
--- PASS: TestNoKubernetes/serial/Start (7.29s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-611619 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-611619 "sudo systemctl is-active --quiet service kubelet": exit status 1 (428.474302ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)
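
`systemctl is-active --quiet` prints nothing and reports state purely through its exit code: 0 means active, and the status 3 seen above means the unit is inactive, which is exactly what this test wants. A sketch of turning that into a boolean, assuming a local `systemctl` rather than the test's `minikube ssh` hop:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive runs `systemctl is-active --quiet kubelet` and maps the
// exit code to a boolean: 0 means active, non-zero (3 in the log) inactive.
func kubeletActive() (bool, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return false, nil // non-zero exit: unit not active
	}
	return false, err // systemctl itself could not run
}

func main() {
	active, err := kubeletActive()
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("kubelet active:", active)
}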

TestNoKubernetes/serial/ProfileList (2.96s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (2.327661388s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.96s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-611619
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-611619: (1.271275035s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (8.27s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611619 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-611619 --driver=docker  --container-runtime=crio: (8.266052118s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.27s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-611619 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-611619 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.679827ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStartStop/group/old-k8s-version/serial/FirstStart (165.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-529225 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0319 19:13:28.574363  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:14:15.938583  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-529225 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m45.264171937s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (165.26s)

TestStartStop/group/no-preload/serial/FirstStart (67.17s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-863158 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-863158 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (1m7.172544783s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.17s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-529225 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a043962-91fb-4138-a923-af2a4bfce847] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5a043962-91fb-4138-a923-af2a4bfce847] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003984026s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-529225 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.69s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-529225 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-529225 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/old-k8s-version/serial/Stop (12.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-529225 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-529225 --alsologtostderr -v=3: (12.042433318s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

TestStartStop/group/no-preload/serial/DeployApp (9.4s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-863158 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [99a2d4ad-9f21-4a3f-bf58-fbdc7a2f506c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [99a2d4ad-9f21-4a3f-bf58-fbdc7a2f506c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003885388s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-863158 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-529225 -n old-k8s-version-529225
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-529225 -n old-k8s-version-529225: exit status 7 (73.865103ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-529225 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
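For reference: the "(may be ok)" note reflects that minikube status reports a stopped host via its exit code, and the test treats exit status 7 as the expected stopped state. Enabling an addon is still accepted against a stopped profile; it is recorded in the profile config and applied on the next start. A hand-run sketch of the same sequence:

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-529225 || echo "host stopped (exit $?)"
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-529225 --images=MetricsScraper=registry.k8s.io/echoserver:1.4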

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (143.7s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-529225 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-529225 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m23.352259858s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-529225 -n old-k8s-version-529225
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (143.70s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.57s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-863158 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-863158 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.365187527s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-863158 describe deploy/metrics-server -n kube-system
E0319 19:16:12.870220  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.57s)

TestStartStop/group/no-preload/serial/Stop (12.15s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-863158 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-863158 --alsologtostderr -v=3: (12.153896506s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-863158 -n no-preload-863158
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-863158 -n no-preload-863158: exit status 7 (148.911748ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-863158 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)

TestStartStop/group/no-preload/serial/SecondStart (270.08s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-863158 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-863158 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m29.724952813s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-863158 -n no-preload-863158
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (270.08s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-w5spl" [095c4b43-b0b6-49d7-9489-03486b6f4eae] Running
E0319 19:18:28.574308  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003426916s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-w5spl" [095c4b43-b0b6-49d7-9489-03486b6f4eae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004647359s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-529225 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-529225 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
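For reference: this check lists every image in the node's container runtime and reports any that are not part of the expected Kubernetes image set; the kindnet and busybox entries above are noted but tolerated. The same listing by hand (jq is assumed here purely for pretty-printing; the JSON layout is left to minikube):

    out/minikube-linux-arm64 -p old-k8s-version-529225 image list --format=json | jq .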

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-529225 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-529225 -n old-k8s-version-529225
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-529225 -n old-k8s-version-529225: exit status 2 (374.661894ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-529225 -n old-k8s-version-529225
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-529225 -n old-k8s-version-529225: exit status 2 (341.458652ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-529225 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-529225 -n old-k8s-version-529225
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-529225 -n old-k8s-version-529225
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)
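For reference: the sequence above is the expected shape of a pause/unpause round trip: while paused, status exits 2 and reports APIServer=Paused and Kubelet=Stopped, and unpause restores both. Reproduced by hand:

    out/minikube-linux-arm64 pause -p old-k8s-version-529225
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-529225    # prints Paused, exits 2
    out/minikube-linux-arm64 unpause -p old-k8s-version-529225
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-529225    # prints Running, exits 0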

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (52.94s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-584735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-584735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (52.93637116s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.94s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-584735 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6349e446-e229-49fe-a6b7-1f2d505f29f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6349e446-e229-49fe-a6b7-1f2d505f29f8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003061595s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-584735 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-584735 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-584735 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/Stop (11.94s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-584735 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-584735 --alsologtostderr -v=3: (11.940016438s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-584735 -n embed-certs-584735
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-584735 -n embed-certs-584735: exit status 7 (81.355366ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-584735 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (277.49s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-584735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0319 19:20:40.798300  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:40.804657  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:40.816171  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:40.837595  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:40.879002  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:40.960497  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:41.122051  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:41.443641  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:42.085896  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:43.368349  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:45.930514  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:20:51.052330  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-584735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m37.121610955s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-584735 -n embed-certs-584735
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (277.49s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zgbhc" [e34d9dc3-3009-47bd-896e-8127c9f44bbf] Running
E0319 19:21:01.294269  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004227503s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zgbhc" [e34d9dc3-3009-47bd-896e-8127c9f44bbf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003131718s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-863158 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-863158 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (3.18s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-863158 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-863158 -n no-preload-863158
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-863158 -n no-preload-863158: exit status 2 (340.46483ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-863158 -n no-preload-863158
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-863158 -n no-preload-863158: exit status 2 (336.423629ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-863158 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-863158 -n no-preload-863158
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-863158 -n no-preload-863158
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.5s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-303589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0319 19:21:21.776457  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:22:02.739021  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-303589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (50.497348294s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.50s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-303589 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [714a36b9-7b7b-4938-aaff-ffe96480bc53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [714a36b9-7b7b-4938-aaff-ffe96480bc53] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005001843s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-303589 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-303589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-303589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047232378s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-303589 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-303589 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-303589 --alsologtostderr -v=3: (11.966226887s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589: exit status 7 (78.032129ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-303589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-303589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0319 19:23:11.648442  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:23:24.660431  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:23:28.574690  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-303589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (5m1.373130716s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.84s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-2t6lt" [c4496a11-fb03-437c-8943-4873986e542f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003449874s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-2t6lt" [c4496a11-fb03-437c-8943-4873986e542f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003398642s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-584735 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-584735 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-584735 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-584735 -n embed-certs-584735
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-584735 -n embed-certs-584735: exit status 2 (337.062805ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-584735 -n embed-certs-584735
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-584735 -n embed-certs-584735: exit status 2 (340.230045ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-584735 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-584735 -n embed-certs-584735
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-584735 -n embed-certs-584735
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestStartStop/group/newest-cni/serial/FirstStart (33.53s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-935881 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-935881 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (33.532186922s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.53s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-935881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-935881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.318549786s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-935881 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-935881 --alsologtostderr -v=3: (1.312317409s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-935881 -n newest-cni-935881
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-935881 -n newest-cni-935881: exit status 7 (75.749208ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-935881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16.93s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-935881 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0319 19:25:40.798833  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-935881 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (16.426882126s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-935881 -n newest-cni-935881
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.93s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/Pause (2.9s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-935881 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-935881 -n newest-cni-935881
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-935881 -n newest-cni-935881: exit status 2 (318.659569ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-935881 -n newest-cni-935881
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-935881 -n newest-cni-935881: exit status 2 (321.343901ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-935881 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-935881 -n newest-cni-935881
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-935881 -n newest-cni-935881
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.90s)

TestNetworkPlugins/group/auto/Start (48.96s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0319 19:26:02.272577  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:02.279022  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:02.290445  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:02.311868  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:02.353363  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:02.434975  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:02.596443  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:02.918084  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:03.560324  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:04.841906  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:07.403828  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:08.502407  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:12.525137  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:12.869920  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:22.767000  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:26:43.248771  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (48.958405749s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.96s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-766848 "pgrep -a kubelet"
I0319 19:26:47.189221  453411 config.go:182] Loaded profile config "auto-766848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
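
Note: the KubeletFlags step just dumps the running kubelet command line over SSH. A hedged variant for pulling out a single flag (the flag name below is illustrative, not something the test asserts on):

    # Show only the CRI endpoint flag from the kubelet command line
    $ out/minikube-linux-arm64 ssh -p auto-766848 "pgrep -a kubelet" | tr ' ' '\n' | grep -- '--container-runtime-endpoint'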

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-766848 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zdn6v" [68fac37d-595f-4cac-b37b-53e7a5a88834] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zdn6v" [68fac37d-595f-4cac-b37b-53e7a5a88834] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003784609s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
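
Note: NetCatPod deploys testdata/netcat-deployment.yaml and then polls until a pod matching app=netcat reports Running and Ready. An equivalent standalone reproduction with plain kubectl, using the same 15m budget the test allows:

    $ kubectl --context auto-766848 replace --force -f testdata/netcat-deployment.yaml
    # Block until the netcat pod is Ready
    $ kubectl --context auto-766848 wait --for=condition=ready pod -l app=netcat --timeout=15m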

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-766848 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)
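
Note: the DNS step checks in-cluster service discovery by resolving the API server's Service name from inside the netcat pod. The short name depends on the pod's resolv.conf search path; resolving the fully qualified name as well (an optional, stricter variant) bypasses search-path expansion:

    $ kubectl --context auto-766848 exec deployment/netcat -- nslookup kubernetes.default
    $ kubectl --context auto-766848 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local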

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
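
Note: Localhost and HairPin probe two different data paths with the same netcat binary. The localhost dial never leaves the pod's network namespace, while the hairpin dial targets the pod's own Service name (netcat), forcing traffic out to the Service VIP and NATed back to the very pod that sent it, a path some CNI/kube-proxy setups get wrong. Side by side, exactly as run above:

    # Loopback inside the pod: no CNI involvement
    $ kubectl --context auto-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: out to the Service VIP and back into the same pod
    $ kubectl --context auto-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"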

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (56.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0319 19:27:24.210175  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (56.738348725s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.74s)
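
Note: each network-plugin group differs from the auto group only in the CNI selection passed at start. To reproduce this cluster and confirm the kindnet DaemonSet came up (the app=kindnet label is taken from the ControllerPod check later in this report):

    $ out/minikube-linux-arm64 start -p kindnet-766848 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio
    $ kubectl --context kindnet-766848 get pods -n kube-system -l app=kindnet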

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-gzbvp" [feed1626-12bf-4d2d-9069-591b50fad317] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004028208s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
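
Note: UserAppExistsAfterStop verifies that a user workload (the dashboard deployed before the stop) comes back after the stop/start cycle. The same check by hand, with the namespace and selector from the log:

    $ kubectl --context default-k8s-diff-port-303589 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard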

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-gzbvp" [feed1626-12bf-4d2d-9069-591b50fad317] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004539877s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-303589 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303589 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
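
Note: VerifyKubernetesImages lists every image cached in the node's container runtime and reports anything outside the expected core Kubernetes set, hence the kindest/kindnetd and busybox entries above. The JSON the test parses can also be rendered for reading (the table format is an assumption about the formats image list accepts):

    $ out/minikube-linux-arm64 -p default-k8s-diff-port-303589 image list --format=json
    $ out/minikube-linux-arm64 -p default-k8s-diff-port-303589 image list --format=table   # assumed human-readable variant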

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-303589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-303589 --alsologtostderr -v=1: (1.222437774s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589: exit status 2 (455.608163ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589: exit status 2 (478.764751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-303589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-303589 --alsologtostderr -v=1: (1.19745983s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-303589 -n default-k8s-diff-port-303589
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.70s)
E0319 19:32:25.568905  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/default-k8s-diff-port-303589/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:32:28.428743  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:32:46.050848  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/default-k8s-diff-port-303589/client.crt: no such file or directory" logger="UnhandledError"
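
Note: the Pause step drives a full pause/verify/unpause/verify cycle. minikube status deliberately exits non-zero (here status 2) while components are paused or stopped, which is why the harness logs "status error: exit status 2 (may be ok)". Condensed, using the exact commands from the log:

    $ out/minikube-linux-arm64 pause -p default-k8s-diff-port-303589
    $ out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303589   # prints Paused, exit code 2
    $ out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-303589     # prints Stopped, exit code 2
    $ out/minikube-linux-arm64 unpause -p default-k8s-diff-port-303589
    $ out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303589   # back to exit code 0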

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (68.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.717892751s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7ppgw" [4b8ff77c-a8c7-4619-92a7-27501563557d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003105801s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
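
Note: ControllerPod is the per-plugin health gate: the connectivity subtests only run once the CNI's own pod (here the kindnet DaemonSet in kube-system) is Running. An equivalent standalone wait:

    $ kubectl --context kindnet-766848 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m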

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-766848 "pgrep -a kubelet"
I0319 19:28:22.332323  453411 config.go:182] Loaded profile config "kindnet-766848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (14.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-766848 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-pj5lf" [1adc9371-2548-484e-906e-db98477607a1] Pending
E0319 19:28:28.574360  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/addons-039972/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-pj5lf" [1adc9371-2548-484e-906e-db98477607a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.003742058s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-766848 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kkmdq" [53db7e4b-e81d-4394-97f4-29a981a9a26e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007696275s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (57.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.335955143s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.34s)
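
Note: besides built-in names, --cni accepts a path to an arbitrary CNI manifest; the custom-flannel group exercises exactly that with testdata/kube-flannel.yaml. To reproduce and then check the flannel pods (namespace and label borrowed from the flannel group's ControllerPod check below, assumed to apply to the custom manifest as well):

    $ out/minikube-linux-arm64 start -p custom-flannel-766848 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio
    $ kubectl --context custom-flannel-766848 -n kube-flannel get pods -l app=flannel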

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-766848 "pgrep -a kubelet"
I0319 19:29:05.308091  453411 config.go:182] Loaded profile config "calico-766848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-766848 replace --force -f testdata/netcat-deployment.yaml
I0319 19:29:05.698054  453411 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6h2mr" [7b4e9d5c-235f-4ed8-b18d-2bb79ef82964] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6h2mr" [7b4e9d5c-235f-4ed8-b18d-2bb79ef82964] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004023532s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-766848 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (81.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m21.483899425s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.48s)
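
Note: the enable-default-cni group installs no CNI manifest at all; --enable-default-cni=true wires up minikube's built-in default (bridge-style) CNI config, which is why this group has no ControllerPod gate and proceeds straight to the kubelet and connectivity checks:

    $ out/minikube-linux-arm64 start -p enable-default-cni-766848 --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=crio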

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-766848 "pgrep -a kubelet"
I0319 19:29:58.163556  453411 config.go:182] Loaded profile config "custom-flannel-766848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-766848 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dpq79" [e2ba0f03-99bf-4545-919a-f410a5a04873] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dpq79" [e2ba0f03-99bf-4545-919a-f410a5a04873] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004238538s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-766848 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (59.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0319 19:30:40.799563  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/old-k8s-version-529225/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:30:55.940662  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:02.272530  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/no-preload-863158/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.304578198s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-766848 "pgrep -a kubelet"
I0319 19:31:06.508947  453411 config.go:182] Loaded profile config "enable-default-cni-766848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-766848 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4hm9r" [dc153072-0bbd-406f-92e9-43a3c475b748] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0319 19:31:12.869861  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/functional-160492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-4hm9r" [dc153072-0bbd-406f-92e9-43a3c475b748] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003901464s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-766848 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-r8qzr" [cb7104c0-0547-4e18-8ec7-7f31d5ed849d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003555601s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (74.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-766848 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.052921151s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-766848 "pgrep -a kubelet"
I0319 19:31:40.127497  453411 config.go:182] Loaded profile config "flannel-766848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-766848 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9w6l2" [da9d59cb-39d4-4c94-bf5a-6fc43023bda1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0319 19:31:47.453363  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-9w6l2" [da9d59cb-39d4-4c94-bf5a-6fc43023bda1] Running
E0319 19:31:47.459908  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:47.471569  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:47.493017  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:47.534381  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:47.615762  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:47.777211  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:48.098888  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:48.740389  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:50.022590  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
E0319 19:31:52.584578  453411 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/auto-766848/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.003823157s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-766848 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-766848 "pgrep -a kubelet"
I0319 19:32:52.847401  453411 config.go:182] Loaded profile config "bridge-766848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-766848 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-b4hp5" [0286168c-992f-446e-98c2-69c3d662c99b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-b4hp5" [0286168c-992f-446e-98c2-69c3d662c99b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003434364s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-766848 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-766848 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (32/331)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.57s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-027384 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-027384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-027384
--- SKIP: TestDownloadOnlyKic (0.57s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-039972 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-089369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-089369
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-766848 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-766848

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-766848

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /etc/hosts:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /etc/resolv.conf:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-766848

>>> host: crictl pods:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: crictl containers:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> k8s: describe netcat deployment:
error: context "kubenet-766848" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-766848" does not exist

>>> k8s: netcat logs:
error: context "kubenet-766848" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-766848" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-766848" does not exist

>>> k8s: coredns logs:
error: context "kubenet-766848" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-766848" does not exist

>>> k8s: api server logs:
error: context "kubenet-766848" does not exist

>>> host: /etc/cni:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: ip a s:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: ip r s:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: iptables-save:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: iptables table nat:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-766848" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-766848" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-766848" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: kubelet daemon config:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> k8s: kubelet logs:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-766848

>>> host: docker daemon status:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: docker daemon config:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: docker system info:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: cri-docker daemon status:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: cri-docker daemon config:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: cri-dockerd version:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: containerd daemon status:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: containerd daemon config:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: containerd config dump:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: crio daemon status:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: crio daemon config:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: /etc/crio:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

>>> host: crio config:
* Profile "kubenet-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-766848"

----------------------- debugLogs end: kubenet-766848 [took: 5.076881934s] --------------------------------
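
Every probe in the dump above fails the same way because the test was skipped before "minikube start" ever ran, so neither the kubenet-766848 profile nor its kubectl context exists. A minimal Go sketch, not part of the suite and assuming kubectl is on PATH, that reproduces one such probe failure:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "kubenet-766848" is the profile name from the log above; its kubectl
	// context was never created because the test skipped before any cluster
	// was started, so this command fails with the same error seen in the dump.
	out, err := exec.Command("kubectl", "--context", "kubenet-766848",
		"get", "nodes").CombinedOutput()
	fmt.Printf("%s(exit: %v)\n", out, err)
}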
helpers_test.go:175: Cleaning up "kubenet-766848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-766848
--- SKIP: TestNetworkPlugins/group/kubenet (5.31s)

TestNetworkPlugins/group/cilium (6.15s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-766848 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-766848

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-766848

>>> host: /etc/nsswitch.conf:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /etc/hosts:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /etc/resolv.conf:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-766848

>>> host: crictl pods:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: crictl containers:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> k8s: describe netcat deployment:
error: context "cilium-766848" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-766848" does not exist

>>> k8s: netcat logs:
error: context "cilium-766848" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-766848" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-766848" does not exist

>>> k8s: coredns logs:
error: context "cilium-766848" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-766848" does not exist

>>> k8s: api server logs:
error: context "cilium-766848" does not exist

>>> host: /etc/cni:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: ip a s:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: ip r s:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: iptables-save:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: iptables table nat:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-766848

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-766848

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-766848" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-766848" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-766848

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-766848

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-766848" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-766848" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-766848" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-766848" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-766848" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: kubelet daemon config:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> k8s: kubelet logs:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20544-448023/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Mar 2025 19:10:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-611619
contexts:
- context:
    cluster: NoKubernetes-611619
    extensions:
    - extension:
        last-update: Wed, 19 Mar 2025 19:10:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-611619
  name: NoKubernetes-611619
current-context: NoKubernetes-611619
kind: Config
preferences: {}
users:
- name: NoKubernetes-611619
  user:
    client-certificate: /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/NoKubernetes-611619/client.crt
    client-key: /home/jenkins/minikube-integration/20544-448023/.minikube/profiles/NoKubernetes-611619/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-766848

>>> host: docker daemon status:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: docker daemon config:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: docker system info:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: cri-docker daemon status:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: cri-docker daemon config:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: cri-dockerd version:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: containerd daemon status:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: containerd daemon config:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: containerd config dump:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: crio daemon status:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: crio daemon config:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: /etc/crio:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

>>> host: crio config:
* Profile "cilium-766848" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766848"

----------------------- debugLogs end: cilium-766848 [took: 5.798824036s] --------------------------------
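
Note the "kubectl config" dump above: unlike the kubenet run, the kubeconfig is not empty. It carries a context left over from the parallel NoKubernetes-611619 profile, while cilium-766848 never appears in it, which is exactly why every kubectl probe reports a missing context. A short sketch, assuming k8s.io/client-go is available and with path handling simplified, of checking a kubeconfig for a context before probing it:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("load error:", err)
		return
	}
	// Against the kubeconfig dumped above, this would list only
	// "NoKubernetes-611619".
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
	if _, ok := cfg.Contexts["cilium-766848"]; !ok {
		// Matches the kubectl errors repeated throughout the dump.
		fmt.Println(`context "cilium-766848" does not exist`)
	}
}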
helpers_test.go:175: Cleaning up "cilium-766848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-766848
--- SKIP: TestNetworkPlugins/group/cilium (6.15s)
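
Both SKIP results come from guards near the top of net_test.go (lines 93 and 102). That source is not reproduced in this report; the following is only a hypothetical reconstruction of the shape of the kubenet guard, with ContainerRuntime() as an assumed helper name:

package net_test

import "testing"

// ContainerRuntime stands in for the suite's real runtime lookup (assumed name).
func ContainerRuntime() string { return "crio" }

func TestKubenetGuard(t *testing.T) {
	// kubenet is kubelet's legacy, non-CNI network mode, so it is skipped
	// whenever the selected runtime (crio in this job) requires CNI.
	if ContainerRuntime() != "docker" {
		t.Skipf("Skipping the test as the %s container runtime requires CNI", ContainerRuntime())
	}
}

The cilium guard at net_test.go:102 is analogous but unconditional, since that plugin test is flagged as outdated and as interfering with other tests.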