Test Report: Docker_Linux_crio_arm64 20315

b15a094293fe6765e372e2dddd744fc5f5e61b59:2025-02-14:38357

Failed tests (3/331)

| Order | Failed test                                   | Duration (s) |
|-------|-----------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                   | 154.2        |
| 99    | TestFunctional/parallel/PersistentVolumeClaim | 202.62       |
| 249   | TestScheduledStopUnix                         | 36.84        |
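
To re-run one of these failures locally, the standard Go subtest selector works against minikube's integration suite. A minimal sketch, assuming the minikube binary has already been built to out/ and that the harness is given the same driver and runtime this run used (docker + crio); exact harness flags vary by environment:

    # Re-run only the failing Ingress subtest from the minikube repo root
    # (hypothetical invocation; adjust flags to match this CI job's setup)
    $ go test ./test/integration -v -timeout 90m -run 'TestAddons/parallel/Ingress'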
TestAddons/parallel/Ingress (154.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-794492 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-794492 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-794492 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d1f1ca14-3313-4bb4-88f0-3ad097b1ae16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d1f1ca14-3313-4bb4-88f0-3ad097b1ae16] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003133836s
I0214 21:19:46.764523  278186 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-794492 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.686117571s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
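ssh propagates the remote command's exit code, and 28 is curl's CURLE_OPERATION_TIMEDOUT: the request went out but nginx never answered within curl's window, rather than the connection being refused. A minimal sketch to repeat the probe by hand against the same profile, with an explicit timeout so the failure mode is unambiguous:

    # curl -m sets a hard timeout in seconds; seeing exit=28 again would confirm
    # the ingress is routing nowhere rather than rejecting the request
    $ out/minikube-linux-arm64 -p addons-794492 ssh \
        'curl -s -m 30 http://127.0.0.1/ -H "Host: nginx.example.com"; echo exit=$?'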
addons_test.go:286: (dbg) Run:  kubectl --context addons-794492 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
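When the probe times out like this, the usual next step is to confirm the controller is actually serving and that the Ingress object got programmed. A hedged triage sketch (the controller deployment name is assumed to be the addon's default, ingress-nginx-controller; the Ingress object names come from the test's testdata and are not shown here):

    # Is the controller pod Running, and did the Ingress get an address?
    $ kubectl --context addons-794492 -n ingress-nginx get pods -o wide
    $ kubectl --context addons-794492 get ingress -A
    # Controller logs often show why a host rule never matched
    $ kubectl --context addons-794492 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50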
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-794492
helpers_test.go:235: (dbg) docker inspect addons-794492:

-- stdout --
	[
	    {
	        "Id": "f9612972d46dc54b237f65abdf55090b22ea49e010797fbafaec1f6b94b13546",
	        "Created": "2025-02-14T21:15:00.950557403Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-14T21:15:01.11503544Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:86f383d95829214691bb905fe90945d8bf2efbbe5a717e0830a616744d143ec9",
	        "ResolvConfPath": "/var/lib/docker/containers/f9612972d46dc54b237f65abdf55090b22ea49e010797fbafaec1f6b94b13546/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f9612972d46dc54b237f65abdf55090b22ea49e010797fbafaec1f6b94b13546/hostname",
	        "HostsPath": "/var/lib/docker/containers/f9612972d46dc54b237f65abdf55090b22ea49e010797fbafaec1f6b94b13546/hosts",
	        "LogPath": "/var/lib/docker/containers/f9612972d46dc54b237f65abdf55090b22ea49e010797fbafaec1f6b94b13546/f9612972d46dc54b237f65abdf55090b22ea49e010797fbafaec1f6b94b13546-json.log",
	        "Name": "/addons-794492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-794492:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-794492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5bb4fbe41f79de6ce8aaefa1a4517fde1fabc1a8fdede0300e6703bdf99d1ba8-init/diff:/var/lib/docker/overlay2/98047733aa5d86fafdd36d9f264e1aa5c3c6b5243d320c9d2e042ec72038fd21/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5bb4fbe41f79de6ce8aaefa1a4517fde1fabc1a8fdede0300e6703bdf99d1ba8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5bb4fbe41f79de6ce8aaefa1a4517fde1fabc1a8fdede0300e6703bdf99d1ba8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5bb4fbe41f79de6ce8aaefa1a4517fde1fabc1a8fdede0300e6703bdf99d1ba8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-794492",
	                "Source": "/var/lib/docker/volumes/addons-794492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-794492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-794492",
	                "name.minikube.sigs.k8s.io": "addons-794492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6582f9531c7d5cac8278f51d0f7ba7422c13472832c82f770b69a6786f8c7918",
	            "SandboxKey": "/var/run/docker/netns/6582f9531c7d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-794492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a91eb355e09a856da1406bd0a26dd0681be15260006f05df43f219dfe211dcc6",
	                    "EndpointID": "85e83c3c5f30994380bf68b6ffc46708e2c63310d88adcf51be7426aca3f9959",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-794492",
	                        "f9612972d46d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
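A side note on the inspect output above: the empty "HostPort" values under HostConfig.PortBindings mean Docker was asked to bind ephemeral host ports (the --publish=127.0.0.1:: form visible in the docker run command later in this log); the resolved mappings live under NetworkSettings.Ports, e.g. 22/tcp -> 127.0.0.1:33136, which is the port the SSH probe used. A quick way to list them without parsing JSON:

    # Print the live host-port mappings for the minikube node container
    $ docker port addons-794492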
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-794492 -n addons-794492
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 logs -n 25: (1.836027514s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-596516                                                                     | download-only-596516   | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC | 14 Feb 25 21:14 UTC |
	| start   | --download-only -p                                                                          | download-docker-233506 | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC |                     |
	|         | download-docker-233506                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-233506                                                                   | download-docker-233506 | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC | 14 Feb 25 21:14 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-741244   | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC |                     |
	|         | binary-mirror-741244                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38683                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-741244                                                                     | binary-mirror-741244   | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC | 14 Feb 25 21:14 UTC |
	| addons  | enable dashboard -p                                                                         | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC |                     |
	|         | addons-794492                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC |                     |
	|         | addons-794492                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-794492 --wait=true                                                                | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC | 14 Feb 25 21:17 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-794492 addons disable                                                                | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:17 UTC | 14 Feb 25 21:17 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-794492 addons disable                                                                | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:17 UTC | 14 Feb 25 21:18 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	|         | -p addons-794492                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794492 addons disable                                                                | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-794492 ip                                                                            | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	| addons  | addons-794492 addons disable                                                                | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-794492 addons disable                                                                | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-794492 addons                                                                        | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794492 addons                                                                        | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-794492 ssh cat                                                                       | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	|         | /opt/local-path-provisioner/pvc-ef979e62-d684-495c-80b9-afcdaf8e6967_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-794492 addons disable                                                                | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:19 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794492 addons                                                                        | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:18 UTC | 14 Feb 25 21:18 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794492 addons                                                                        | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:19 UTC | 14 Feb 25 21:19 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794492 addons                                                                        | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:19 UTC | 14 Feb 25 21:19 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-794492 addons                                                                        | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:19 UTC | 14 Feb 25 21:19 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-794492 ssh curl -s                                                                   | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:19 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-794492 ip                                                                            | addons-794492          | jenkins | v1.35.0 | 14 Feb 25 21:21 UTC | 14 Feb 25 21:21 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:14:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:14:36.184997  278950 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:14:36.185216  278950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:14:36.185245  278950 out.go:358] Setting ErrFile to fd 2...
	I0214 21:14:36.185264  278950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:14:36.185533  278950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:14:36.186050  278950 out.go:352] Setting JSON to false
	I0214 21:14:36.186947  278950 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7023,"bootTime":1739560653,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 21:14:36.187077  278950 start.go:140] virtualization:  
	I0214 21:14:36.190661  278950 out.go:177] * [addons-794492] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0214 21:14:36.193572  278950 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:14:36.193688  278950 notify.go:220] Checking for updates...
	I0214 21:14:36.199438  278950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:14:36.202347  278950 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:14:36.205235  278950 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	I0214 21:14:36.208140  278950 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 21:14:36.211116  278950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:14:36.214484  278950 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:14:36.239157  278950 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0214 21:14:36.239277  278950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:14:36.300515  278950 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-14 21:14:36.291479848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:14:36.300627  278950 docker.go:318] overlay module found
	I0214 21:14:36.303793  278950 out.go:177] * Using the docker driver based on user configuration
	I0214 21:14:36.306665  278950 start.go:304] selected driver: docker
	I0214 21:14:36.306688  278950 start.go:908] validating driver "docker" against <nil>
	I0214 21:14:36.306704  278950 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:14:36.307462  278950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:14:36.369134  278950 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-14 21:14:36.35962279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:14:36.369380  278950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:14:36.369600  278950 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 21:14:36.372486  278950 out.go:177] * Using Docker driver with root privileges
	I0214 21:14:36.375360  278950 cni.go:84] Creating CNI manager for ""
	I0214 21:14:36.375427  278950 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 21:14:36.375446  278950 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 21:14:36.375534  278950 start.go:347] cluster config:
	{Name:addons-794492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-794492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:14:36.378644  278950 out.go:177] * Starting "addons-794492" primary control-plane node in "addons-794492" cluster
	I0214 21:14:36.381661  278950 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 21:14:36.385269  278950 out.go:177] * Pulling base image v0.0.46-1739182054-20387 ...
	I0214 21:14:36.387978  278950 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:14:36.388042  278950 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0214 21:14:36.388054  278950 cache.go:56] Caching tarball of preloaded images
	I0214 21:14:36.388055  278950 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0214 21:14:36.388137  278950 preload.go:172] Found /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0214 21:14:36.388148  278950 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0214 21:14:36.388493  278950 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/config.json ...
	I0214 21:14:36.388532  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/config.json: {Name:mk31788c8b7bc865b9f9e30a36f66a9ee753e2bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:14:36.403930  278950 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0214 21:14:36.404067  278950 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory
	I0214 21:14:36.404093  278950 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory, skipping pull
	I0214 21:14:36.404102  278950 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad exists in cache, skipping pull
	I0214 21:14:36.404109  278950 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad as a tarball
	I0214 21:14:36.404125  278950 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad from local cache
	I0214 21:14:53.832425  278950 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad from cached tarball
	I0214 21:14:53.832485  278950 cache.go:230] Successfully downloaded all kic artifacts
	I0214 21:14:53.832531  278950 start.go:360] acquireMachinesLock for addons-794492: {Name:mka407b226cacc4b00e2b0ec7d1e60c2354cd968 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:14:53.832653  278950 start.go:364] duration metric: took 96.612µs to acquireMachinesLock for "addons-794492"
	I0214 21:14:53.832705  278950 start.go:93] Provisioning new machine with config: &{Name:addons-794492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-794492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:14:53.832786  278950 start.go:125] createHost starting for "" (driver="docker")
	I0214 21:14:53.836201  278950 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0214 21:14:53.836452  278950 start.go:159] libmachine.API.Create for "addons-794492" (driver="docker")
	I0214 21:14:53.836558  278950 client.go:168] LocalClient.Create starting
	I0214 21:14:53.836687  278950 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem
	I0214 21:14:54.192061  278950 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/cert.pem
	I0214 21:14:54.478011  278950 cli_runner.go:164] Run: docker network inspect addons-794492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 21:14:54.499432  278950 cli_runner.go:211] docker network inspect addons-794492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 21:14:54.499516  278950 network_create.go:284] running [docker network inspect addons-794492] to gather additional debugging logs...
	I0214 21:14:54.499538  278950 cli_runner.go:164] Run: docker network inspect addons-794492
	W0214 21:14:54.516535  278950 cli_runner.go:211] docker network inspect addons-794492 returned with exit code 1
	I0214 21:14:54.516571  278950 network_create.go:287] error running [docker network inspect addons-794492]: docker network inspect addons-794492: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-794492 not found
	I0214 21:14:54.516585  278950 network_create.go:289] output of [docker network inspect addons-794492]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-794492 not found
	
	** /stderr **
	I0214 21:14:54.516694  278950 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 21:14:54.533685  278950 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b7ff00}
	I0214 21:14:54.533730  278950 network_create.go:124] attempt to create docker network addons-794492 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0214 21:14:54.533791  278950 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-794492 addons-794492
	I0214 21:14:54.603800  278950 network_create.go:108] docker network addons-794492 192.168.49.0/24 created
	I0214 21:14:54.603834  278950 kic.go:121] calculated static IP "192.168.49.2" for the "addons-794492" container
	I0214 21:14:54.603904  278950 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 21:14:54.619324  278950 cli_runner.go:164] Run: docker volume create addons-794492 --label name.minikube.sigs.k8s.io=addons-794492 --label created_by.minikube.sigs.k8s.io=true
	I0214 21:14:54.637705  278950 oci.go:103] Successfully created a docker volume addons-794492
	I0214 21:14:54.637809  278950 cli_runner.go:164] Run: docker run --rm --name addons-794492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-794492 --entrypoint /usr/bin/test -v addons-794492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -d /var/lib
	I0214 21:14:56.660321  278950 cli_runner.go:217] Completed: docker run --rm --name addons-794492-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-794492 --entrypoint /usr/bin/test -v addons-794492:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -d /var/lib: (2.022459826s)
	I0214 21:14:56.660353  278950 oci.go:107] Successfully prepared a docker volume addons-794492
	I0214 21:14:56.660396  278950 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:14:56.660418  278950 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 21:14:56.660491  278950 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-794492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 21:15:00.876688  278950 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-794492:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -I lz4 -xf /preloaded.tar -C /extractDir: (4.216147695s)
	I0214 21:15:00.876724  278950 kic.go:203] duration metric: took 4.216301234s to extract preloaded images to volume ...
	W0214 21:15:00.876884  278950 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 21:15:00.877011  278950 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 21:15:00.934286  278950 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-794492 --name addons-794492 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-794492 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-794492 --network addons-794492 --ip 192.168.49.2 --volume addons-794492:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad
	I0214 21:15:01.297902  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Running}}
	I0214 21:15:01.327224  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:01.354491  278950 cli_runner.go:164] Run: docker exec addons-794492 stat /var/lib/dpkg/alternatives/iptables
	I0214 21:15:01.427998  278950 oci.go:144] the created container "addons-794492" has a running status.
	I0214 21:15:01.428035  278950 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa...
	I0214 21:15:02.180097  278950 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 21:15:02.209795  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:02.236893  278950 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 21:15:02.236924  278950 kic_runner.go:114] Args: [docker exec --privileged addons-794492 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 21:15:02.293595  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:02.341948  278950 machine.go:93] provisionDockerMachine start ...
	I0214 21:15:02.342045  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:02.372024  278950 main.go:141] libmachine: Using SSH client type: native
	I0214 21:15:02.372548  278950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I0214 21:15:02.372569  278950 main.go:141] libmachine: About to run SSH command:
	hostname
	I0214 21:15:02.506546  278950 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-794492
	
	I0214 21:15:02.506573  278950 ubuntu.go:169] provisioning hostname "addons-794492"
	I0214 21:15:02.506667  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:02.527594  278950 main.go:141] libmachine: Using SSH client type: native
	I0214 21:15:02.527841  278950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I0214 21:15:02.527861  278950 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-794492 && echo "addons-794492" | sudo tee /etc/hostname
	I0214 21:15:02.687278  278950 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-794492
	
	I0214 21:15:02.687362  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:02.704243  278950 main.go:141] libmachine: Using SSH client type: native
	I0214 21:15:02.704496  278950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I0214 21:15:02.704515  278950 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-794492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-794492/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-794492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:15:02.839292  278950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:15:02.839320  278950 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20315-272800/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-272800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-272800/.minikube}
	I0214 21:15:02.839349  278950 ubuntu.go:177] setting up certificates
	I0214 21:15:02.839359  278950 provision.go:84] configureAuth start
	I0214 21:15:02.839420  278950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-794492
	I0214 21:15:02.857066  278950 provision.go:143] copyHostCerts
	I0214 21:15:02.857150  278950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-272800/.minikube/ca.pem (1082 bytes)
	I0214 21:15:02.857275  278950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-272800/.minikube/cert.pem (1123 bytes)
	I0214 21:15:02.857334  278950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-272800/.minikube/key.pem (1675 bytes)
	I0214 21:15:02.857379  278950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-272800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca-key.pem org=jenkins.addons-794492 san=[127.0.0.1 192.168.49.2 addons-794492 localhost minikube]
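The server cert above is minted from the profile CA with SANs covering every name a client might dial: 127.0.0.1, the node IP 192.168.49.2, the hostname addons-794492, localhost, and minikube. Here is a hedged crypto/x509 sketch of a cert with the same shape; it generates a throwaway CA inline instead of loading ca.pem/ca-key.pem, and skips the PEM-writing step.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf with the SAN list the log reports for the server cert.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-794492"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-794492", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = der // real code would PEM-encode this as server.pem and write server-key.pem
}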
	I0214 21:15:03.151907  278950 provision.go:177] copyRemoteCerts
	I0214 21:15:03.151982  278950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:15:03.152029  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:03.168796  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:03.263999  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:15:03.290027  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0214 21:15:03.314454  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 21:15:03.338476  278950 provision.go:87] duration metric: took 499.102761ms to configureAuth
	I0214 21:15:03.338544  278950 ubuntu.go:193] setting minikube options for container-runtime
	I0214 21:15:03.338750  278950 config.go:182] Loaded profile config "addons-794492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:15:03.338869  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:03.356475  278950 main.go:141] libmachine: Using SSH client type: native
	I0214 21:15:03.356718  278950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I0214 21:15:03.356741  278950 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:15:03.592020  278950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:15:03.592096  278950 machine.go:96] duration metric: took 1.250123835s to provisionDockerMachine
	I0214 21:15:03.592122  278950 client.go:171] duration metric: took 9.755550368s to LocalClient.Create
	I0214 21:15:03.592169  278950 start.go:167] duration metric: took 9.755701715s to libmachine.API.Create "addons-794492"
	I0214 21:15:03.592195  278950 start.go:293] postStartSetup for "addons-794492" (driver="docker")
	I0214 21:15:03.592232  278950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:15:03.592347  278950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:15:03.592448  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:03.610260  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:03.704083  278950 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:15:03.707114  278950 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 21:15:03.707156  278950 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 21:15:03.707167  278950 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 21:15:03.707174  278950 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0214 21:15:03.707190  278950 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-272800/.minikube/addons for local assets ...
	I0214 21:15:03.707263  278950 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-272800/.minikube/files for local assets ...
	I0214 21:15:03.707289  278950 start.go:296] duration metric: took 115.062953ms for postStartSetup
	I0214 21:15:03.707613  278950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-794492
	I0214 21:15:03.723927  278950 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/config.json ...
	I0214 21:15:03.724213  278950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:15:03.724265  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:03.740811  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:03.827716  278950 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 21:15:03.832190  278950 start.go:128] duration metric: took 9.99938701s to createHost
	I0214 21:15:03.832217  278950 start.go:83] releasing machines lock for "addons-794492", held for 9.999548934s
	I0214 21:15:03.832289  278950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-794492
	I0214 21:15:03.848987  278950 ssh_runner.go:195] Run: cat /version.json
	I0214 21:15:03.849049  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:03.849302  278950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:15:03.849361  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:03.870417  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:03.882979  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:03.962336  278950 ssh_runner.go:195] Run: systemctl --version
	I0214 21:15:04.101137  278950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:15:04.241253  278950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 21:15:04.245567  278950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:15:04.266930  278950 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0214 21:15:04.267009  278950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:15:04.300572  278950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
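The `find ... -exec mv` runs above neutralize any CNI config that would conflict with the CNI minikube is about to install (kindnet), by renaming loopback/bridge/podman files with a .mk_disabled suffix. A rough Go equivalent of the bridge/podman pass, assuming the same path and suffix conventions:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rename every bridge/podman CNI config in /etc/cni/net.d that isn't
	// already disabled, mirroring the find/mv pipeline in the log.
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			}
		}
	}
}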
	I0214 21:15:04.300641  278950 start.go:495] detecting cgroup driver to use...
	I0214 21:15:04.300692  278950 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0214 21:15:04.300772  278950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:15:04.316861  278950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:15:04.328763  278950 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:15:04.328876  278950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:15:04.343298  278950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:15:04.358576  278950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:15:04.445481  278950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:15:04.545019  278950 docker.go:233] disabling docker service ...
	I0214 21:15:04.545098  278950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:15:04.566382  278950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:15:04.579555  278950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:15:04.672919  278950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:15:04.774966  278950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 21:15:04.786507  278950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:15:04.804502  278950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 21:15:04.804626  278950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:15:04.815481  278950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:15:04.815570  278950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:15:04.825918  278950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:15:04.836483  278950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:15:04.846412  278950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:15:04.855822  278950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:15:04.865650  278950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:15:04.881805  278950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
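The sed sequence from 21:15:04.804 onward edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and open unprivileged low ports. The drop-in below is an illustration of the end state under those assumptions; the table layout follows how CRI-O documents these keys and is not a verbatim copy of the file.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]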
	I0214 21:15:04.892186  278950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:15:04.901268  278950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 21:15:04.909992  278950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:15:04.992313  278950 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 21:15:05.108066  278950 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:15:05.108169  278950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:15:05.112358  278950 start.go:563] Will wait 60s for crictl version
	I0214 21:15:05.112434  278950 ssh_runner.go:195] Run: which crictl
	I0214 21:15:05.116284  278950 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:15:05.155182  278950 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0214 21:15:05.155303  278950 ssh_runner.go:195] Run: crio --version
	I0214 21:15:05.197731  278950 ssh_runner.go:195] Run: crio --version
	I0214 21:15:05.249402  278950 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0214 21:15:05.252178  278950 cli_runner.go:164] Run: docker network inspect addons-794492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 21:15:05.267007  278950 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 21:15:05.270717  278950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
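The bash one-liner above is an upsert on /etc/hosts: drop any stale host.minikube.internal line, append the fresh mapping, and replace the file through a temp copy so readers never see a half-written file. A self-contained Go sketch of the same pattern follows; upsertHost is a hypothetical helper, real code would need root, and the demo deliberately targets a scratch file rather than /etc/hosts.

package main

import (
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line maps
// ip to name: stale lines for the name are dropped, the fresh entry is
// appended, and the file is swapped in via a temp file, mirroring the
// grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() { _ = upsertHost("hosts", "192.168.49.1", "host.minikube.internal") }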
	I0214 21:15:05.281591  278950 kubeadm.go:875] updating cluster {Name:addons-794492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-794492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:15:05.281722  278950 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:15:05.281793  278950 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:15:05.362814  278950 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 21:15:05.362841  278950 crio.go:433] Images already preloaded, skipping extraction
	I0214 21:15:05.362900  278950 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:15:05.400139  278950 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 21:15:05.400161  278950 cache_images.go:84] Images are preloaded, skipping loading
	I0214 21:15:05.400170  278950 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.32.1 crio true true} ...
	I0214 21:15:05.400261  278950 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-794492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-794492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 21:15:05.400345  278950 ssh_runner.go:195] Run: crio config
	I0214 21:15:05.450731  278950 cni.go:84] Creating CNI manager for ""
	I0214 21:15:05.450756  278950 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 21:15:05.450766  278950 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:15:05.450789  278950 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-794492 NodeName:addons-794492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 21:15:05.450925  278950 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-794492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 21:15:05.451002  278950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 21:15:05.459972  278950 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:15:05.460041  278950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:15:05.468898  278950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0214 21:15:05.487077  278950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:15:05.505086  278950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0214 21:15:05.522687  278950 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 21:15:05.526343  278950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:15:05.537451  278950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:15:05.616316  278950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:15:05.630284  278950 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492 for IP: 192.168.49.2
	I0214 21:15:05.630309  278950 certs.go:194] generating shared ca certs ...
	I0214 21:15:05.630327  278950 certs.go:226] acquiring lock for ca certs: {Name:mk331a8d0ee567d6460e2465c9b7c32324663cbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:05.631448  278950 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-272800/.minikube/ca.key
	I0214 21:15:06.055444  278950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt ...
	I0214 21:15:06.055480  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt: {Name:mk64e98e4e439eec9d826d9f700986bbec2e0738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:06.056270  278950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-272800/.minikube/ca.key ...
	I0214 21:15:06.056291  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/ca.key: {Name:mk9a5a62ad5f7ac13c124db45eed86bcc9277433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:06.057005  278950 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.key
	I0214 21:15:07.223551  278950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.crt ...
	I0214 21:15:07.223582  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.crt: {Name:mkc166bf493ff48d9564c9de4439c34860187a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:07.223760  278950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.key ...
	I0214 21:15:07.223768  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.key: {Name:mkb0539777704ef3e7f1820158e4ac7c40cdf1c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:07.223831  278950 certs.go:256] generating profile certs ...
	I0214 21:15:07.223890  278950 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.key
	I0214 21:15:07.223901  278950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt with IP's: []
	I0214 21:15:07.422439  278950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt ...
	I0214 21:15:07.422472  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: {Name:mk53c85180f6548b41b5857f4edbc9beb0095dfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:07.422670  278950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.key ...
	I0214 21:15:07.422685  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.key: {Name:mk876a334265c4306fc7ad0ad2ef814ee1229302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:07.422769  278950 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.key.e0e8e4de
	I0214 21:15:07.422790  278950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.crt.e0e8e4de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0214 21:15:08.217394  278950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.crt.e0e8e4de ...
	I0214 21:15:08.217427  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.crt.e0e8e4de: {Name:mkd2f26b5318ab0b762a949f33811171971e3f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:08.217636  278950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.key.e0e8e4de ...
	I0214 21:15:08.217652  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.key.e0e8e4de: {Name:mkcf91597e9f94b55f6c47c805f9270870c1858a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:08.217745  278950 certs.go:381] copying /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.crt.e0e8e4de -> /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.crt
	I0214 21:15:08.217825  278950 certs.go:385] copying /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.key.e0e8e4de -> /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.key
	I0214 21:15:08.217882  278950 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/proxy-client.key
	I0214 21:15:08.217902  278950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/proxy-client.crt with IP's: []
	I0214 21:15:08.693369  278950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/proxy-client.crt ...
	I0214 21:15:08.693400  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/proxy-client.crt: {Name:mk58fb1acf6ac4c06b9ac80ff560c50cc5732322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:08.693592  278950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/proxy-client.key ...
	I0214 21:15:08.693606  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/proxy-client.key: {Name:mka59495a6c48c7ca155ea6ce2925102e104f49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:08.693804  278950 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 21:15:08.693848  278950 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:15:08.693881  278950 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:15:08.693910  278950 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/key.pem (1675 bytes)
	I0214 21:15:08.694547  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:15:08.719292  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:15:08.746189  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:15:08.783196  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0214 21:15:08.810533  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 21:15:08.834651  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 21:15:08.858129  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:15:08.881857  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 21:15:08.906234  278950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:15:08.930919  278950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 21:15:08.949481  278950 ssh_runner.go:195] Run: openssl version
	I0214 21:15:08.954857  278950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:15:08.964376  278950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:15:08.967902  278950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:15:08.967966  278950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:15:08.974892  278950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
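The two commands above wire the minikube CA into the OpenSSL trust store, which looks certificates up by subject-hash filename (<hash>.0): the log computes the hash (b5213941) and symlinks that name to minikubeCA.pem. A small Go sketch of the same convention, shelling out to the same `openssl x509 -hash -noout` invocation (needs root to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Compute the OpenSSL subject hash of the CA, then link <hash>.0 to it
	// so TLS clients using the system store can find the CA.
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace a stale link if present
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}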
	I0214 21:15:08.984627  278950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:15:08.988116  278950 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 21:15:08.988167  278950 kubeadm.go:392] StartCluster: {Name:addons-794492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-794492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:15:08.988248  278950 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:15:08.988306  278950 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:15:09.034236  278950 cri.go:89] found id: ""
	I0214 21:15:09.034314  278950 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 21:15:09.043893  278950 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 21:15:09.053400  278950 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0214 21:15:09.053477  278950 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:15:09.063168  278950 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:15:09.063192  278950 kubeadm.go:157] found existing configuration files:
	
	I0214 21:15:09.063248  278950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:15:09.073448  278950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:15:09.073518  278950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:15:09.082804  278950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:15:09.091752  278950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:15:09.091845  278950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:15:09.100750  278950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:15:09.109594  278950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:15:09.109688  278950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:15:09.118475  278950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:15:09.127135  278950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:15:09.127245  278950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 21:15:09.136144  278950 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 21:15:09.180466  278950 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 21:15:09.180530  278950 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 21:15:09.199036  278950 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0214 21:15:09.199125  278950 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1077-aws
	I0214 21:15:09.199167  278950 kubeadm.go:310] OS: Linux
	I0214 21:15:09.199217  278950 kubeadm.go:310] CGROUPS_CPU: enabled
	I0214 21:15:09.199271  278950 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0214 21:15:09.199323  278950 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0214 21:15:09.199376  278950 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0214 21:15:09.199435  278950 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0214 21:15:09.199487  278950 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0214 21:15:09.199538  278950 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0214 21:15:09.199592  278950 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0214 21:15:09.199646  278950 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0214 21:15:09.262594  278950 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 21:15:09.262739  278950 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 21:15:09.262834  278950 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 21:15:09.269578  278950 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 21:15:09.274100  278950 out.go:235]   - Generating certificates and keys ...
	I0214 21:15:09.274315  278950 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 21:15:09.274430  278950 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 21:15:09.778674  278950 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 21:15:10.095761  278950 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 21:15:10.429220  278950 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 21:15:11.096290  278950 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 21:15:11.935254  278950 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 21:15:11.935583  278950 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-794492 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 21:15:12.508441  278950 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 21:15:12.508933  278950 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-794492 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 21:15:13.762247  278950 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 21:15:14.359941  278950 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 21:15:14.880414  278950 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 21:15:14.880714  278950 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 21:15:15.202146  278950 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 21:15:15.969170  278950 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 21:15:17.542668  278950 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 21:15:18.357337  278950 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 21:15:18.874214  278950 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 21:15:18.875062  278950 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 21:15:18.880302  278950 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 21:15:18.883712  278950 out.go:235]   - Booting up control plane ...
	I0214 21:15:18.883819  278950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 21:15:18.883925  278950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 21:15:18.884644  278950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 21:15:18.895136  278950 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 21:15:18.901518  278950 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 21:15:18.901803  278950 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 21:15:18.995666  278950 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 21:15:18.995793  278950 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 21:15:20.997777  278950 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.00181095s
	I0214 21:15:20.997891  278950 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 21:15:26.999452  278950 kubeadm.go:310] [api-check] The API server is healthy after 6.002014909s
	I0214 21:15:27.023685  278950 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 21:15:27.049715  278950 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 21:15:27.081087  278950 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 21:15:27.081298  278950 kubeadm.go:310] [mark-control-plane] Marking the node addons-794492 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 21:15:27.099396  278950 kubeadm.go:310] [bootstrap-token] Using token: uc94wq.yrhaq874lv44pnzz
	I0214 21:15:27.102422  278950 out.go:235]   - Configuring RBAC rules ...
	I0214 21:15:27.102558  278950 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 21:15:27.107483  278950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 21:15:27.118709  278950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 21:15:27.123263  278950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 21:15:27.127477  278950 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 21:15:27.131299  278950 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 21:15:27.406515  278950 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 21:15:27.842664  278950 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 21:15:28.411408  278950 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 21:15:28.411433  278950 kubeadm.go:310] 
	I0214 21:15:28.411499  278950 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 21:15:28.411508  278950 kubeadm.go:310] 
	I0214 21:15:28.411590  278950 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 21:15:28.411599  278950 kubeadm.go:310] 
	I0214 21:15:28.411626  278950 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 21:15:28.411693  278950 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 21:15:28.411778  278950 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 21:15:28.411797  278950 kubeadm.go:310] 
	I0214 21:15:28.411853  278950 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 21:15:28.411858  278950 kubeadm.go:310] 
	I0214 21:15:28.411906  278950 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 21:15:28.411910  278950 kubeadm.go:310] 
	I0214 21:15:28.411967  278950 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 21:15:28.412052  278950 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 21:15:28.412149  278950 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 21:15:28.412164  278950 kubeadm.go:310] 
	I0214 21:15:28.412281  278950 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 21:15:28.412370  278950 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 21:15:28.412376  278950 kubeadm.go:310] 
	I0214 21:15:28.412475  278950 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uc94wq.yrhaq874lv44pnzz \
	I0214 21:15:28.412578  278950 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c06c75d7df404df93ce031bbacdfe2f3cd0cfb1441a4d171159ff58cc3179696 \
	I0214 21:15:28.412599  278950 kubeadm.go:310] 	--control-plane 
	I0214 21:15:28.412603  278950 kubeadm.go:310] 
	I0214 21:15:28.412689  278950 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 21:15:28.412693  278950 kubeadm.go:310] 
	I0214 21:15:28.412782  278950 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uc94wq.yrhaq874lv44pnzz \
	I0214 21:15:28.412893  278950 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c06c75d7df404df93ce031bbacdfe2f3cd0cfb1441a4d171159ff58cc3179696 
	I0214 21:15:28.415665  278950 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0214 21:15:28.415899  278950 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1077-aws\n", err: exit status 1
	I0214 21:15:28.416017  278950 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 21:15:28.416040  278950 cni.go:84] Creating CNI manager for ""
	I0214 21:15:28.416049  278950 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 21:15:28.419141  278950 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 21:15:28.422000  278950 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 21:15:28.425778  278950 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0214 21:15:28.425799  278950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0214 21:15:28.443423  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 21:15:28.717059  278950 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 21:15:28.717192  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:28.717271  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-794492 minikube.k8s.io/updated_at=2025_02_14T21_15_28_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=addons-794492 minikube.k8s.io/primary=true
	I0214 21:15:28.732279  278950 ops.go:34] apiserver oom_adj: -16
	I0214 21:15:28.849449  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:29.350458  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:29.850275  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:30.350068  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:30.849557  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:31.349596  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:31.850234  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:32.349630  278950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:15:32.452173  278950 kubeadm.go:1105] duration metric: took 3.735026049s to wait for elevateKubeSystemPrivileges
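The burst of `kubectl get sa default` calls every ~500ms above is a readiness poll: kubeadm has finished, but the controller manager still has to create the "default" ServiceAccount before minikube can bind cluster-admin to kube-system. A client-go sketch of the same wait follows; the kubeconfig path is taken from the log, and the poll helper is the real k8s.io/apimachinery wait package.

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Poll until the "default" ServiceAccount exists, mirroring the
	// repeated kubectl get sa default calls in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
		2*time.Minute, true, func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil // keep polling on any error
		})
	if err != nil {
		panic(err)
	}
}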
	I0214 21:15:32.452205  278950 kubeadm.go:394] duration metric: took 23.464041813s to StartCluster
	I0214 21:15:32.452222  278950 settings.go:142] acquiring lock: {Name:mkc0e41ab9ab5cb3c1dd458e58b0ec830c4e7cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:32.452944  278950 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:15:32.453391  278950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/kubeconfig: {Name:mke18ca9b25400737f047f62f0239cf4640d5a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:15:32.453592  278950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 21:15:32.453616  278950 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:15:32.453850  278950 config.go:182] Loaded profile config "addons-794492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:15:32.453890  278950 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0214 21:15:32.453974  278950 addons.go:69] Setting yakd=true in profile "addons-794492"
	I0214 21:15:32.453988  278950 addons.go:238] Setting addon yakd=true in "addons-794492"
	I0214 21:15:32.454010  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.454466  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.455006  278950 addons.go:69] Setting metrics-server=true in profile "addons-794492"
	I0214 21:15:32.455031  278950 addons.go:238] Setting addon metrics-server=true in "addons-794492"
	I0214 21:15:32.455068  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.455532  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.455697  278950 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-794492"
	I0214 21:15:32.455744  278950 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-794492"
	I0214 21:15:32.455782  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.456234  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.457779  278950 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-794492"
	I0214 21:15:32.457808  278950 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-794492"
	I0214 21:15:32.457839  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.459245  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.459932  278950 addons.go:69] Setting registry=true in profile "addons-794492"
	I0214 21:15:32.462750  278950 addons.go:238] Setting addon registry=true in "addons-794492"
	I0214 21:15:32.462841  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.460102  278950 addons.go:69] Setting storage-provisioner=true in profile "addons-794492"
	I0214 21:15:32.460116  278950 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-794492"
	I0214 21:15:32.460126  278950 addons.go:69] Setting volcano=true in profile "addons-794492"
	I0214 21:15:32.460132  278950 addons.go:69] Setting volumesnapshots=true in profile "addons-794492"
	I0214 21:15:32.460201  278950 out.go:177] * Verifying Kubernetes components...
	I0214 21:15:32.462477  278950 addons.go:69] Setting ingress=true in profile "addons-794492"
	I0214 21:15:32.462495  278950 addons.go:69] Setting cloud-spanner=true in profile "addons-794492"
	I0214 21:15:32.462499  278950 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-794492"
	I0214 21:15:32.462503  278950 addons.go:69] Setting default-storageclass=true in profile "addons-794492"
	I0214 21:15:32.462506  278950 addons.go:69] Setting gcp-auth=true in profile "addons-794492"
	I0214 21:15:32.462650  278950 addons.go:69] Setting inspektor-gadget=true in profile "addons-794492"
	I0214 21:15:32.462659  278950 addons.go:69] Setting ingress-dns=true in profile "addons-794492"
	I0214 21:15:32.467219  278950 addons.go:238] Setting addon ingress-dns=true in "addons-794492"
	I0214 21:15:32.467296  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.467824  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.468684  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.486697  278950 addons.go:238] Setting addon ingress=true in "addons-794492"
	I0214 21:15:32.486821  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.488722  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.489095  278950 addons.go:238] Setting addon storage-provisioner=true in "addons-794492"
	I0214 21:15:32.489162  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.489637  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.499297  278950 addons.go:238] Setting addon cloud-spanner=true in "addons-794492"
	I0214 21:15:32.499394  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.499889  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.508536  278950 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-794492"
	I0214 21:15:32.508918  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.527128  278950 addons.go:238] Setting addon volcano=true in "addons-794492"
	I0214 21:15:32.527191  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.527360  278950 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-794492"
	I0214 21:15:32.527391  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.527671  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.527804  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.565111  278950 addons.go:238] Setting addon volumesnapshots=true in "addons-794492"
	I0214 21:15:32.565185  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.565815  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.566021  278950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:15:32.570947  278950 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-794492"
	I0214 21:15:32.571457  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.573964  278950 mustload.go:65] Loading cluster: addons-794492
	I0214 21:15:32.574231  278950 config.go:182] Loaded profile config "addons-794492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:15:32.574559  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.587163  278950 addons.go:238] Setting addon inspektor-gadget=true in "addons-794492"
	I0214 21:15:32.587228  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.587708  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.613147  278950 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0214 21:15:32.619228  278950 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 21:15:32.619254  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0214 21:15:32.619326  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.672705  278950 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0214 21:15:32.679560  278950 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	W0214 21:15:32.703570  278950 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
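The volcano failure is deliberate rather than flaky: the addon's enable callback rejects the crio runtime outright, so this job can never turn it on. If volcano coverage is needed, the profile has to run a runtime the addon accepts; the containerd choice below is an assumption, since the log only states that crio is unsupported:

	# Hypothetical workaround on a fresh profile with a different runtime
	minikube start -p volcano-test --container-runtime=containerd
	minikube addons enable volcano -p volcano-test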
	I0214 21:15:32.710553  278950 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0214 21:15:32.714591  278950 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0214 21:15:32.714653  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0214 21:15:32.714753  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.715167  278950 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0214 21:15:32.715184  278950 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0214 21:15:32.715228  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.736566  278950 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:15:32.737720  278950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 21:15:32.739344  278950 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0214 21:15:32.739380  278950 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0214 21:15:32.739491  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.745773  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.746338  278950 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 21:15:32.746382  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 21:15:32.746466  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.768821  278950 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0214 21:15:32.773964  278950 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 21:15:32.774030  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0214 21:15:32.774125  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.774365  278950 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0214 21:15:32.781065  278950 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0214 21:15:32.784316  278950 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0214 21:15:32.790190  278950 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0214 21:15:32.790795  278950 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0214 21:15:32.790812  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0214 21:15:32.790888  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.811368  278950 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0214 21:15:32.812894  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0214 21:15:32.819233  278950 out.go:177]   - Using image docker.io/registry:2.8.3
	I0214 21:15:32.819592  278950 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 21:15:32.819626  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0214 21:15:32.819711  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.832321  278950 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0214 21:15:32.832342  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0214 21:15:32.832419  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.838960  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0214 21:15:32.841982  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0214 21:15:32.843431  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:32.846034  278950 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0214 21:15:32.848040  278950 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0214 21:15:32.848068  278950 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0214 21:15:32.848140  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.855878  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0214 21:15:32.858783  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0214 21:15:32.861730  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0214 21:15:32.863253  278950 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0214 21:15:32.863273  278950 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0214 21:15:32.863337  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.878615  278950 addons.go:238] Setting addon default-storageclass=true in "addons-794492"
	I0214 21:15:32.878653  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.879082  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.881970  278950 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-794492"
	I0214 21:15:32.882013  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:32.882423  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:32.902856  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0214 21:15:32.906953  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0214 21:15:32.914685  278950 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0214 21:15:32.927448  278950 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0214 21:15:32.927480  278950 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0214 21:15:32.927556  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:32.973580  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:32.984610  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:32.986876  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.007282  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.010841  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.031520  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.041228  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.072172  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.072735  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.089242  278950 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 21:15:33.089280  278950 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 21:15:33.089349  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:33.098134  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.098899  278950 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0214 21:15:33.102237  278950 out.go:177]   - Using image docker.io/busybox:stable
	I0214 21:15:33.105146  278950 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 21:15:33.105172  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0214 21:15:33.105237  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:33.119225  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.142609  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:33.150741  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	W0214 21:15:33.152020  278950 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0214 21:15:33.152047  278950 retry.go:31] will retry after 250.511282ms: ssh: handshake failed: EOF
	I0214 21:15:33.166199  278950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:15:33.354208  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 21:15:33.421576  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0214 21:15:33.441313  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 21:15:33.445024  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 21:15:33.448835  278950 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0214 21:15:33.448909  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0214 21:15:33.477654  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 21:15:33.522037  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 21:15:33.522384  278950 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0214 21:15:33.522436  278950 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0214 21:15:33.565644  278950 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0214 21:15:33.565719  278950 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0214 21:15:33.569739  278950 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0214 21:15:33.569811  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0214 21:15:33.570634  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0214 21:15:33.577107  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0214 21:15:33.585705  278950 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0214 21:15:33.585785  278950 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0214 21:15:33.601862  278950 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0214 21:15:33.601936  278950 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0214 21:15:33.697732  278950 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0214 21:15:33.697805  278950 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0214 21:15:33.706906  278950 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0214 21:15:33.706979  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0214 21:15:33.750226  278950 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0214 21:15:33.750307  278950 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0214 21:15:33.778009  278950 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0214 21:15:33.778085  278950 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0214 21:15:33.784034  278950 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0214 21:15:33.784112  278950 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0214 21:15:33.889455  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0214 21:15:33.889766  278950 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0214 21:15:33.889818  278950 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0214 21:15:33.894048  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 21:15:33.956710  278950 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0214 21:15:33.956787  278950 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0214 21:15:33.968314  278950 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 21:15:33.968401  278950 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0214 21:15:33.984004  278950 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0214 21:15:33.984082  278950 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0214 21:15:34.033735  278950 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0214 21:15:34.033809  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0214 21:15:34.122901  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 21:15:34.128094  278950 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0214 21:15:34.128170  278950 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0214 21:15:34.141416  278950 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0214 21:15:34.141499  278950 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0214 21:15:34.193045  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0214 21:15:34.245101  278950 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 21:15:34.245177  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0214 21:15:34.249100  278950 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0214 21:15:34.249180  278950 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0214 21:15:34.355595  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 21:15:34.369380  278950 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0214 21:15:34.369454  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0214 21:15:34.459032  278950 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0214 21:15:34.459119  278950 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0214 21:15:34.550059  278950 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0214 21:15:34.550128  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0214 21:15:34.632756  278950 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0214 21:15:34.632835  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0214 21:15:34.809882  278950 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 21:15:34.809963  278950 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0214 21:15:34.964890  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 21:15:36.493949  278950 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.756195848s)
	I0214 21:15:36.493987  278950 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
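The sed pipeline that just completed rewrote the coredns ConfigMap in place, inserting a hosts block that resolves host.minikube.internal to the gateway IP ahead of the forward plugin. The live result can be inspected directly; the verification command is an editorial sketch, while the expected fragment comes straight from the sed expression in the log:

	# Inspect the injected hosts block in the running Corefile
	kubectl --context addons-794492 -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# Expected fragment:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }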
	I0214 21:15:36.494974  278950 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.328751902s)
	I0214 21:15:36.495693  278950 node_ready.go:35] waiting up to 6m0s for node "addons-794492" to be "Ready" ...
	I0214 21:15:36.495847  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.141607238s)
	I0214 21:15:36.495880  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.074285188s)
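node_ready.go now polls the node object for up to 6m, and each `"Ready":"False"` warning below is one round of that poll. The same wait can be expressed with plain kubectl (an equivalent sketch, not the code the test actually runs):

	# Block until the node reports Ready, mirroring the 6m budget in the log
	kubectl --context addons-794492 wait --for=condition=Ready node/addons-794492 --timeout=6m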
	I0214 21:15:37.252650  278950 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-794492" context rescaled to 1 replicas
	W0214 21:15:38.547072  278950 node_ready.go:57] node "addons-794492" has "Ready":"False" status (will retry)
	I0214 21:15:39.693287  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.251886846s)
	I0214 21:15:39.693322  278950 addons.go:479] Verifying addon ingress=true in "addons-794492"
	I0214 21:15:39.693543  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.248445447s)
	I0214 21:15:39.693589  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.215865475s)
	I0214 21:15:39.693646  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.171541629s)
	I0214 21:15:39.693862  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.123165226s)
	I0214 21:15:39.693915  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.116739532s)
	I0214 21:15:39.693964  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.804432822s)
	I0214 21:15:39.693984  278950 addons.go:479] Verifying addon registry=true in "addons-794492"
	I0214 21:15:39.694061  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.500940679s)
	I0214 21:15:39.694029  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.57104685s)
	I0214 21:15:39.695834  278950 addons.go:479] Verifying addon metrics-server=true in "addons-794492"
	I0214 21:15:39.694324  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.338646277s)
	W0214 21:15:39.695890  278950 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0214 21:15:39.695913  278950 retry.go:31] will retry after 252.719693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
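Both failures above share one root cause: `kubectl apply` submits the VolumeSnapshotClass in the same batch that creates its CRD, and the API server has not finished establishing the new type when the custom resource arrives. minikube recovers by retrying (the `--force` re-apply at 21:15:39.949164 below succeeds); applying in two phases avoids the race entirely. A sketch of that ordering, reusing the file paths from the log:

	# Phase 1: create the CRD, then wait until the type is served
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl \
	  apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl \
	  wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# Phase 2: the object that previously failed with "no matches for kind"
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl \
	  apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml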
	I0214 21:15:39.693970  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.799857267s)
	I0214 21:15:39.696959  278950 out.go:177] * Verifying registry addon...
	I0214 21:15:39.697007  278950 out.go:177] * Verifying ingress addon...
	I0214 21:15:39.698910  278950 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-794492 service yakd-dashboard -n yakd-dashboard
	
	I0214 21:15:39.701716  278950 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0214 21:15:39.702921  278950 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0214 21:15:39.715708  278950 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0214 21:15:39.715737  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0214 21:15:39.715993  278950 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
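The default-storageclass error is an optimistic-concurrency conflict: the addon read the local-path StorageClass, another writer updated it first, and the update carrying the stale resourceVersion was rejected. A patch carries no resourceVersion, so it cannot hit this conflict; the command below is the standard manual equivalent, not necessarily what the addon retries with:

	# Mark local-path non-default via a merge patch (no resourceVersion, so no conflict)
	kubectl --context addons-794492 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'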
	I0214 21:15:39.716951  278950 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0214 21:15:39.716977  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:39.949164  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 21:15:39.989079  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.024073979s)
	I0214 21:15:39.989120  278950 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-794492"
	I0214 21:15:39.994406  278950 out.go:177] * Verifying csi-hostpath-driver addon...
	I0214 21:15:39.997603  278950 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0214 21:15:40.014503  278950 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0214 21:15:40.014581  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:40.215976  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:40.216490  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:40.501463  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:40.706170  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:40.706248  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0214 21:15:40.999719  278950 node_ready.go:57] node "addons-794492" has "Ready":"False" status (will retry)
	I0214 21:15:41.001542  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:41.205560  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:41.205677  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:41.502305  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:41.705034  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:41.706073  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:42.001021  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:42.206541  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:42.206757  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:42.504839  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:42.708297  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:42.708649  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:42.756652  278950 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.807431982s)
	I0214 21:15:43.001869  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:43.206798  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:43.207321  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0214 21:15:43.498588  278950 node_ready.go:57] node "addons-794492" has "Ready":"False" status (will retry)
	I0214 21:15:43.500584  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:43.551804  278950 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0214 21:15:43.551896  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:43.574312  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:43.678251  278950 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0214 21:15:43.696544  278950 addons.go:238] Setting addon gcp-auth=true in "addons-794492"
	I0214 21:15:43.696602  278950 host.go:66] Checking if "addons-794492" exists ...
	I0214 21:15:43.697079  278950 cli_runner.go:164] Run: docker container inspect addons-794492 --format={{.State.Status}}
	I0214 21:15:43.708113  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:43.708182  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:43.714065  278950 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0214 21:15:43.714125  278950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-794492
	I0214 21:15:43.732424  278950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/addons-794492/id_rsa Username:docker}
	I0214 21:15:43.837653  278950 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0214 21:15:43.840668  278950 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0214 21:15:43.843551  278950 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0214 21:15:43.843583  278950 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0214 21:15:43.863126  278950 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0214 21:15:43.863152  278950 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0214 21:15:43.881870  278950 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 21:15:43.881897  278950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0214 21:15:43.899887  278950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 21:15:44.000354  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:44.206741  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:44.207782  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:44.408644  278950 addons.go:479] Verifying addon gcp-auth=true in "addons-794492"
	I0214 21:15:44.411695  278950 out.go:177] * Verifying gcp-auth addon...
	I0214 21:15:44.416123  278950 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0214 21:15:44.419744  278950 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0214 21:15:44.419769  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:15:44.501011  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:44.705590  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:44.706626  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:44.919874  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:15:45.000686  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:45.206063  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:45.206286  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:45.419245  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0214 21:15:45.499209  278950 node_ready.go:57] node "addons-794492" has "Ready":"False" status (will retry)
	I0214 21:15:45.501135  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:45.705193  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:45.706957  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:45.919830  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:15:46.000979  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:46.205501  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:46.206018  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:46.419833  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:15:46.500776  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:46.705059  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:46.707310  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:46.920890  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:15:47.021365  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:47.206279  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:47.206380  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:47.419451  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:15:47.501872  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:15:47.704926  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:15:47.707172  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:15:47.919622  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0214 21:15:47.999522  278950 node_ready.go:57] node "addons-794492" has "Ready":"False" status (will retry)
	[... ~265 near-identical polling lines omitted: the four selectors "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", and "kubernetes.io/minikube-addons=gcp-auth" stayed Pending: [<nil>] at sub-second polling intervals, and node_ready.go:57 kept warning that node "addons-794492" has "Ready":"False" status roughly every 2s, from 21:15:48.000531 through 21:16:19.422991 ...]
	I0214 21:16:19.584253  278950 node_ready.go:49] node "addons-794492" is "Ready"
	I0214 21:16:19.584284  278950 node_ready.go:38] duration metric: took 43.088574608s for node "addons-794492" to be "Ready" ...
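For anyone reproducing this gate outside the test harness: the node-readiness wait that just completed can be approximated with kubectl. A minimal sketch, assuming the same context name as in the log; the 120s timeout is an arbitrary choice, not taken from the harness:

	# wait until the node reports the Ready condition
	kubectl --context addons-794492 wait --for=condition=Ready node/addons-794492 --timeout=120s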
	I0214 21:16:19.584299  278950 api_server.go:52] waiting for apiserver process to appear ...
	I0214 21:16:19.584365  278950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:16:19.600430  278950 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0214 21:16:19.600458  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:19.614025  278950 api_server.go:72] duration metric: took 47.16037921s to wait for apiserver process to appear ...
	I0214 21:16:19.614053  278950 api_server.go:88] waiting for apiserver healthz status ...
	I0214 21:16:19.614075  278950 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0214 21:16:19.638372  278950 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
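The healthz probe above can be reproduced by hand. A minimal sketch, assuming anonymous access to /healthz is allowed (the default on kubeadm-style clusters such as minikube); -k skips verification of the cluster's self-signed CA:

	curl -k https://192.168.49.2:8443/healthz
	# expected response body on success: ok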
	I0214 21:16:19.640332  278950 api_server.go:141] control plane version: v1.32.1
	I0214 21:16:19.640363  278950 api_server.go:131] duration metric: took 26.302062ms to wait for apiserver health ...
	I0214 21:16:19.640373  278950 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 21:16:19.652453  278950 system_pods.go:59] 18 kube-system pods found
	I0214 21:16:19.652494  278950 system_pods.go:61] "coredns-668d6bf9bc-5gz8n" [1d018d1a-148b-46df-9509-fdd6c589f988] Pending
	I0214 21:16:19.652505  278950 system_pods.go:61] "csi-hostpath-attacher-0" [7c65fb69-fc99-42d4-acf2-df0284935208] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 21:16:19.652513  278950 system_pods.go:61] "csi-hostpath-resizer-0" [053692fc-e83e-457e-9e75-46da467ab5b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 21:16:19.652518  278950 system_pods.go:61] "csi-hostpathplugin-4rrtf" [0f0d93d8-304b-4909-91fe-370108992ddd] Pending
	I0214 21:16:19.652522  278950 system_pods.go:61] "etcd-addons-794492" [7c7c60a9-5797-46ec-87e6-e9e230ef6422] Running
	I0214 21:16:19.652526  278950 system_pods.go:61] "kindnet-kmmf5" [92c85051-27f4-4808-8d9d-256ec7687227] Running
	I0214 21:16:19.652530  278950 system_pods.go:61] "kube-apiserver-addons-794492" [859a1c5e-d14e-463e-b26b-65ebf7162297] Running
	I0214 21:16:19.652534  278950 system_pods.go:61] "kube-controller-manager-addons-794492" [b4926128-9fbe-4d4d-93e9-3d7399026f27] Running
	I0214 21:16:19.652543  278950 system_pods.go:61] "kube-ingress-dns-minikube" [1dd18bf3-445a-4e4c-9ba4-2af7a770be42] Pending
	I0214 21:16:19.652547  278950 system_pods.go:61] "kube-proxy-xqxb9" [749fb668-de12-48c7-93f5-6ca522c92ee7] Running
	I0214 21:16:19.652551  278950 system_pods.go:61] "kube-scheduler-addons-794492" [a51e8dae-d844-46f8-84e9-28075f9830fe] Running
	I0214 21:16:19.652562  278950 system_pods.go:61] "metrics-server-7fbb699795-mww6g" [6b44cf41-f1cb-4c1b-b2b3-ef3f5d40d6f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 21:16:19.652566  278950 system_pods.go:61] "nvidia-device-plugin-daemonset-8f2xh" [508118fd-f3b6-4f76-819c-6fe2f0fd0e81] Pending
	I0214 21:16:19.652579  278950 system_pods.go:61] "registry-6c88467877-4ntpr" [c896f503-30c4-4427-b501-d736eb1a7d4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0214 21:16:19.652583  278950 system_pods.go:61] "registry-proxy-lvdl4" [2e950ccc-848b-4461-bebb-8258a3ed7a24] Pending
	I0214 21:16:19.652589  278950 system_pods.go:61] "snapshot-controller-68b874b76f-58hm5" [8edbff2b-ac98-4ee4-8336-ebc4f581f723] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 21:16:19.652599  278950 system_pods.go:61] "snapshot-controller-68b874b76f-slfpz" [f5838d2f-d408-4207-a17d-a0116241e95c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 21:16:19.652606  278950 system_pods.go:61] "storage-provisioner" [afd266b0-9108-4c38-89ee-5460df7b3d14] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 21:16:19.652612  278950 system_pods.go:74] duration metric: took 12.233243ms to wait for pod list to return data ...
	I0214 21:16:19.652622  278950 default_sa.go:34] waiting for default service account to be created ...
	I0214 21:16:19.664297  278950 default_sa.go:45] found service account: "default"
	I0214 21:16:19.664332  278950 default_sa.go:55] duration metric: took 11.703542ms for default service account to be created ...
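An equivalent manual check for the default service account, sketched with the same context; kubeadm-style clusters create this account shortly after the controller manager starts:

	kubectl --context addons-794492 -n default get serviceaccount default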
	I0214 21:16:19.664344  278950 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 21:16:19.669645  278950 system_pods.go:86] 18 kube-system pods found
	I0214 21:16:19.669681  278950 system_pods.go:89] "coredns-668d6bf9bc-5gz8n" [1d018d1a-148b-46df-9509-fdd6c589f988] Pending
	I0214 21:16:19.669691  278950 system_pods.go:89] "csi-hostpath-attacher-0" [7c65fb69-fc99-42d4-acf2-df0284935208] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 21:16:19.669698  278950 system_pods.go:89] "csi-hostpath-resizer-0" [053692fc-e83e-457e-9e75-46da467ab5b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 21:16:19.669703  278950 system_pods.go:89] "csi-hostpathplugin-4rrtf" [0f0d93d8-304b-4909-91fe-370108992ddd] Pending
	I0214 21:16:19.669708  278950 system_pods.go:89] "etcd-addons-794492" [7c7c60a9-5797-46ec-87e6-e9e230ef6422] Running
	I0214 21:16:19.669712  278950 system_pods.go:89] "kindnet-kmmf5" [92c85051-27f4-4808-8d9d-256ec7687227] Running
	I0214 21:16:19.669717  278950 system_pods.go:89] "kube-apiserver-addons-794492" [859a1c5e-d14e-463e-b26b-65ebf7162297] Running
	I0214 21:16:19.669721  278950 system_pods.go:89] "kube-controller-manager-addons-794492" [b4926128-9fbe-4d4d-93e9-3d7399026f27] Running
	I0214 21:16:19.669728  278950 system_pods.go:89] "kube-ingress-dns-minikube" [1dd18bf3-445a-4e4c-9ba4-2af7a770be42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 21:16:19.669736  278950 system_pods.go:89] "kube-proxy-xqxb9" [749fb668-de12-48c7-93f5-6ca522c92ee7] Running
	I0214 21:16:19.669743  278950 system_pods.go:89] "kube-scheduler-addons-794492" [a51e8dae-d844-46f8-84e9-28075f9830fe] Running
	I0214 21:16:19.669752  278950 system_pods.go:89] "metrics-server-7fbb699795-mww6g" [6b44cf41-f1cb-4c1b-b2b3-ef3f5d40d6f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 21:16:19.669756  278950 system_pods.go:89] "nvidia-device-plugin-daemonset-8f2xh" [508118fd-f3b6-4f76-819c-6fe2f0fd0e81] Pending
	I0214 21:16:19.669771  278950 system_pods.go:89] "registry-6c88467877-4ntpr" [c896f503-30c4-4427-b501-d736eb1a7d4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0214 21:16:19.669776  278950 system_pods.go:89] "registry-proxy-lvdl4" [2e950ccc-848b-4461-bebb-8258a3ed7a24] Pending
	I0214 21:16:19.669789  278950 system_pods.go:89] "snapshot-controller-68b874b76f-58hm5" [8edbff2b-ac98-4ee4-8336-ebc4f581f723] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 21:16:19.669797  278950 system_pods.go:89] "snapshot-controller-68b874b76f-slfpz" [f5838d2f-d408-4207-a17d-a0116241e95c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 21:16:19.669807  278950 system_pods.go:89] "storage-provisioner" [afd266b0-9108-4c38-89ee-5460df7b3d14] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 21:16:19.669822  278950 retry.go:31] will retry after 253.910318ms: missing components: kube-dns
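The "missing components: kube-dns" retry above is waiting on the CoreDNS pods, which carry the k8s-app=kube-dns label in minikube's kube-system namespace. A sketch of the equivalent manual wait; the 90s timeout is an assumption:

	kubectl --context addons-794492 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=90s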
	I0214 21:16:19.708063  278950 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0214 21:16:19.708085  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:19.712563  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:19.923703  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:20.026941  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:20.032924  278950 system_pods.go:86] 18 kube-system pods found
	I0214 21:16:20.032964  278950 system_pods.go:89] "coredns-668d6bf9bc-5gz8n" [1d018d1a-148b-46df-9509-fdd6c589f988] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 21:16:20.032973  278950 system_pods.go:89] "csi-hostpath-attacher-0" [7c65fb69-fc99-42d4-acf2-df0284935208] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 21:16:20.032983  278950 system_pods.go:89] "csi-hostpath-resizer-0" [053692fc-e83e-457e-9e75-46da467ab5b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 21:16:20.032993  278950 system_pods.go:89] "csi-hostpathplugin-4rrtf" [0f0d93d8-304b-4909-91fe-370108992ddd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 21:16:20.032998  278950 system_pods.go:89] "etcd-addons-794492" [7c7c60a9-5797-46ec-87e6-e9e230ef6422] Running
	I0214 21:16:20.033003  278950 system_pods.go:89] "kindnet-kmmf5" [92c85051-27f4-4808-8d9d-256ec7687227] Running
	I0214 21:16:20.033008  278950 system_pods.go:89] "kube-apiserver-addons-794492" [859a1c5e-d14e-463e-b26b-65ebf7162297] Running
	I0214 21:16:20.033013  278950 system_pods.go:89] "kube-controller-manager-addons-794492" [b4926128-9fbe-4d4d-93e9-3d7399026f27] Running
	I0214 21:16:20.033021  278950 system_pods.go:89] "kube-ingress-dns-minikube" [1dd18bf3-445a-4e4c-9ba4-2af7a770be42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 21:16:20.033026  278950 system_pods.go:89] "kube-proxy-xqxb9" [749fb668-de12-48c7-93f5-6ca522c92ee7] Running
	I0214 21:16:20.033035  278950 system_pods.go:89] "kube-scheduler-addons-794492" [a51e8dae-d844-46f8-84e9-28075f9830fe] Running
	I0214 21:16:20.033041  278950 system_pods.go:89] "metrics-server-7fbb699795-mww6g" [6b44cf41-f1cb-4c1b-b2b3-ef3f5d40d6f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 21:16:20.033053  278950 system_pods.go:89] "nvidia-device-plugin-daemonset-8f2xh" [508118fd-f3b6-4f76-819c-6fe2f0fd0e81] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 21:16:20.033061  278950 system_pods.go:89] "registry-6c88467877-4ntpr" [c896f503-30c4-4427-b501-d736eb1a7d4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0214 21:16:20.033071  278950 system_pods.go:89] "registry-proxy-lvdl4" [2e950ccc-848b-4461-bebb-8258a3ed7a24] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 21:16:20.033079  278950 system_pods.go:89] "snapshot-controller-68b874b76f-58hm5" [8edbff2b-ac98-4ee4-8336-ebc4f581f723] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 21:16:20.033091  278950 system_pods.go:89] "snapshot-controller-68b874b76f-slfpz" [f5838d2f-d408-4207-a17d-a0116241e95c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 21:16:20.033097  278950 system_pods.go:89] "storage-provisioner" [afd266b0-9108-4c38-89ee-5460df7b3d14] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 21:16:20.033111  278950 retry.go:31] will retry after 303.156553ms: missing components: kube-dns
	I0214 21:16:20.234687  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:20.238491  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:20.350996  278950 system_pods.go:86] 18 kube-system pods found [... 18 per-pod detail lines omitted; states identical to the 21:16:20.032924 listing above ...]
	I0214 21:16:20.351247  278950 retry.go:31] will retry after 293.303333ms: missing components: kube-dns
	I0214 21:16:20.445786  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:20.545778  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:20.652390  278950 system_pods.go:86] 18 kube-system pods found [... 18 per-pod detail lines omitted; states identical to the 21:16:20.032924 listing above ...]
	I0214 21:16:20.652691  278950 retry.go:31] will retry after 596.437552ms: missing components: kube-dns
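The retry.go:31 line above is the backoff half of the wait loop: check which required components are still missing, sleep a jittered delay, then try again. A minimal sketch of that pattern in Go, assuming a hypothetical checkComponents() helper; minikube's actual retry code differs in detail, and the 500ms base is chosen only to echo the "will retry after 596.437552ms" above:

	// Sketch of the retry-after pattern, not minikube's implementation.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// checkComponents is a stand-in: it would list kube-system pods and
	// return the required components that are not yet Running.
	func checkComponents() []string {
		return []string{"kube-dns"} // placeholder, matching the log line above
	}

	func main() {
		backoff := 500 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			missing := checkComponents()
			if len(missing) == 0 {
				fmt.Println("all components running")
				return
			}
			// Jitter keeps concurrent waiters from polling in lockstep;
			// the base delay then grows between attempts.
			delay := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
			time.Sleep(delay)
			backoff = backoff * 3 / 2
		}
	}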
	I0214 21:16:20.705776  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:20.706144  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:20.922829  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:21.013973  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:21.205110  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:21.206237  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:21.256850  278950 system_pods.go:86] 18 kube-system pods found
	I0214 21:16:21.256931  278950 system_pods.go:89] "coredns-668d6bf9bc-5gz8n" [1d018d1a-148b-46df-9509-fdd6c589f988] Running
	I0214 21:16:21.256958  278950 system_pods.go:89] "csi-hostpath-attacher-0" [7c65fb69-fc99-42d4-acf2-df0284935208] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 21:16:21.256998  278950 system_pods.go:89] "csi-hostpath-resizer-0" [053692fc-e83e-457e-9e75-46da467ab5b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 21:16:21.257025  278950 system_pods.go:89] "csi-hostpathplugin-4rrtf" [0f0d93d8-304b-4909-91fe-370108992ddd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 21:16:21.257042  278950 system_pods.go:89] "etcd-addons-794492" [7c7c60a9-5797-46ec-87e6-e9e230ef6422] Running
	I0214 21:16:21.257065  278950 system_pods.go:89] "kindnet-kmmf5" [92c85051-27f4-4808-8d9d-256ec7687227] Running
	I0214 21:16:21.257103  278950 system_pods.go:89] "kube-apiserver-addons-794492" [859a1c5e-d14e-463e-b26b-65ebf7162297] Running
	I0214 21:16:21.257123  278950 system_pods.go:89] "kube-controller-manager-addons-794492" [b4926128-9fbe-4d4d-93e9-3d7399026f27] Running
	I0214 21:16:21.257146  278950 system_pods.go:89] "kube-ingress-dns-minikube" [1dd18bf3-445a-4e4c-9ba4-2af7a770be42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 21:16:21.257181  278950 system_pods.go:89] "kube-proxy-xqxb9" [749fb668-de12-48c7-93f5-6ca522c92ee7] Running
	I0214 21:16:21.257200  278950 system_pods.go:89] "kube-scheduler-addons-794492" [a51e8dae-d844-46f8-84e9-28075f9830fe] Running
	I0214 21:16:21.257224  278950 system_pods.go:89] "metrics-server-7fbb699795-mww6g" [6b44cf41-f1cb-4c1b-b2b3-ef3f5d40d6f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 21:16:21.257259  278950 system_pods.go:89] "nvidia-device-plugin-daemonset-8f2xh" [508118fd-f3b6-4f76-819c-6fe2f0fd0e81] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 21:16:21.257288  278950 system_pods.go:89] "registry-6c88467877-4ntpr" [c896f503-30c4-4427-b501-d736eb1a7d4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0214 21:16:21.257312  278950 system_pods.go:89] "registry-proxy-lvdl4" [2e950ccc-848b-4461-bebb-8258a3ed7a24] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 21:16:21.257350  278950 system_pods.go:89] "snapshot-controller-68b874b76f-58hm5" [8edbff2b-ac98-4ee4-8336-ebc4f581f723] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 21:16:21.257377  278950 system_pods.go:89] "snapshot-controller-68b874b76f-slfpz" [f5838d2f-d408-4207-a17d-a0116241e95c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 21:16:21.257398  278950 system_pods.go:89] "storage-provisioner" [afd266b0-9108-4c38-89ee-5460df7b3d14] Running
	I0214 21:16:21.257436  278950 system_pods.go:126] duration metric: took 1.593084728s to wait for k8s-apps to be running ...
	I0214 21:16:21.257464  278950 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 21:16:21.257558  278950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:16:21.282650  278950 system_svc.go:56] duration metric: took 25.176473ms WaitForService to wait for kubelet
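The kubelet check just above relies entirely on systemctl's exit status: "systemctl is-active --quiet <unit>" prints nothing and exits 0 when the unit is active, non-zero otherwise. minikube issues the command over SSH via ssh_runner; the local os/exec sketch below only illustrates the same idea:

	// Sketch of the system_svc.go liveness check: only the exit code matters.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func serviceActive(unit string) bool {
		// --quiet suppresses output; success/failure rides on the exit status.
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", serviceActive("kubelet"))
	}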
	I0214 21:16:21.282733  278950 kubeadm.go:578] duration metric: took 48.829090863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 21:16:21.282789  278950 node_conditions.go:102] verifying NodePressure condition ...
	I0214 21:16:21.286752  278950 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 21:16:21.286837  278950 node_conditions.go:123] node cpu capacity is 2
	I0214 21:16:21.286869  278950 node_conditions.go:105] duration metric: took 4.060616ms to run NodePressure ...
	I0214 21:16:21.286912  278950 start.go:241] waiting for startup goroutines ...
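Each kapi.go:96 line in the long run that follows is one tick of a label-selector poll: list the pods matching the selector, report their state, and go around again until everything is Running. A minimal client-go sketch of such a loop, assuming kubeconfig-based setup; the namespace, selector, interval, and timeout are illustrative, not minikube's exact values:

	// Sketch of a label-selector wait loop in the style of kapi.go:96.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForSelector(cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollImmediate(500*time.Millisecond, 18*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Mirrors the log's "current state: Pending: [<nil>]" form.
				fmt.Printf("waiting for pod %q, current state: Pending: [%v]\n", selector, err)
				return false, nil // transient errors are tolerated; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			panic(err)
		}
	}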
	I0214 21:16:21.420414  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:21.509101  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:21.707893  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:21.708415  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:21.920000  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:22.023811  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:22.206214  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:22.206415  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:22.419389  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:22.501698  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:22.705364  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:22.705388  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:22.923625  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:23.022365  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:23.207110  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:23.207621  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:23.419606  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:23.501670  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:23.706093  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:23.707244  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:23.920457  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:24.002687  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:24.207072  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:24.207867  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:24.420359  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:24.521426  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:24.705785  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:24.707437  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:24.922689  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:25.025905  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:25.204922  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:25.206045  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:25.420334  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:25.501267  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:25.707400  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:25.707864  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:25.921294  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:26.021292  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:26.206685  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:26.206974  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:26.419751  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:26.501068  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:26.706727  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:26.707258  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:26.919334  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:27.001955  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:27.205948  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:27.206312  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:27.419299  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:27.501643  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:27.708712  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:27.708842  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:27.919284  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:28.001519  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:28.204600  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:28.206464  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:28.419415  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:28.502393  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:28.707877  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:28.708187  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:28.920400  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:29.001339  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:29.207186  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:29.208907  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:29.420732  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:29.501133  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:29.707584  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:29.707986  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:29.919221  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:30.003036  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:30.206781  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:30.208093  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:30.419483  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:30.501664  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:30.708139  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:30.709306  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:30.919138  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:31.001401  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:31.209957  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:31.210452  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:31.419363  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:31.502715  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:31.706133  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:31.708194  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:31.920506  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:32.003700  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:32.207410  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:32.208795  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:32.420259  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:32.502019  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:32.708317  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:32.708781  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:32.925235  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:33.002349  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:33.207648  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:33.208175  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:33.424234  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:33.521898  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:33.708803  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:33.709201  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:33.921874  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:34.012068  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:34.205189  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:34.206590  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:34.419618  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:34.500767  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:34.708515  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:34.708723  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:34.919579  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:35.001991  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:35.208402  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:35.209052  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:35.420085  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:35.501735  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:35.707315  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:35.707592  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:35.919965  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:36.002836  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:36.207076  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:36.207759  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:36.419830  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:36.500963  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:36.709956  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:36.712261  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:36.919791  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:37.032987  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:37.206450  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:37.208843  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:37.419918  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:37.501315  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:37.704905  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:37.706113  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:37.919088  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:38.014456  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:38.206263  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:38.206270  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:38.419258  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:38.502075  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:38.706077  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:38.706162  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:38.920207  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:39.007424  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:39.205825  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:39.207198  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:39.419444  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:39.501705  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:39.704927  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:39.706996  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:39.918799  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:40.001550  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:40.207944  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:40.208319  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:40.419174  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:40.501599  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:40.708569  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:40.708915  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:40.919923  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:41.001664  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:41.206767  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:41.207306  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:41.419268  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:41.501409  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:41.707759  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:41.708161  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:41.919589  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:42.006253  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:42.211950  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:42.212258  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:42.420197  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:42.502930  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:42.707939  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:42.708399  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:42.920666  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:43.001477  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:43.207490  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:43.207856  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:43.420347  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:43.502506  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:43.709191  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:43.709222  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:43.921096  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:44.001048  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:44.205352  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:44.206081  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:44.419883  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:44.500600  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:44.705733  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:44.707236  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:44.919306  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:45.002031  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:45.213377  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:45.213614  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:45.419582  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:45.504353  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:45.713187  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:45.713728  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:45.920140  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:46.001881  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:46.208887  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:46.209392  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:46.421492  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:46.504792  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:46.717781  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:46.720480  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:46.927736  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:47.003018  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:47.211647  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:47.212024  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:47.419220  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:47.501641  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:47.706936  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:47.707962  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:47.921853  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:48.022966  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:48.206340  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:48.206545  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:48.419802  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:48.502261  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:48.706023  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:48.706895  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:48.919047  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:49.001791  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:49.205427  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:49.206735  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:49.421194  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:49.503460  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:49.707074  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:49.707122  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:49.921598  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:50.001840  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:50.206972  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:50.209324  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:50.419690  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:50.501492  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:50.705000  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:50.706078  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:50.920427  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:51.001407  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:51.206159  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:51.206440  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:51.419390  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:51.501903  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:51.704864  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:51.706081  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:51.919736  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:52.000551  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:52.206177  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:52.206287  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:52.419857  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:52.500871  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:52.705410  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:52.706615  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:52.921359  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:53.001482  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:53.206860  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:53.206984  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:53.419461  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:53.501653  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:53.705842  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:53.707133  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:53.919299  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:54.002137  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:54.205251  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:54.207697  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:54.419674  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:54.500649  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:54.705571  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:54.705750  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:54.919573  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:55.000550  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:55.204373  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:55.207582  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:55.419417  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:55.501360  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:55.705209  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:55.708322  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:55.919270  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:56.001435  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:56.206225  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:56.206843  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:56.419584  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:56.500862  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:56.706866  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:56.707179  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:56.919916  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:57.001310  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:57.211657  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:57.212928  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:57.419890  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:57.501392  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:57.707660  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:57.708081  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:57.919118  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:58.001618  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:58.207553  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:58.207861  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:58.419905  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:58.500887  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:58.707374  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:58.709291  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:58.920326  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:59.003437  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:59.210888  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:59.211272  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:59.419161  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:16:59.502345  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:16:59.705331  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:16:59.707269  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:16:59.919754  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:00.001493  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:00.214029  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:00.215821  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:00.421065  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:00.502142  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:00.708837  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:00.712202  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:00.920317  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:01.002579  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:01.208450  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:01.208942  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:01.422019  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:01.501741  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:01.711098  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:01.711540  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:01.919638  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:02.002289  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:02.213454  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:02.214077  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:02.419873  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:02.502249  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:02.707167  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:02.707196  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:02.920294  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:03.001560  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:03.209515  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:03.210180  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:03.419304  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:03.501383  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:03.706134  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:03.706302  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:03.921517  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:04.002856  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:04.205639  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:04.207214  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:04.419704  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:04.501580  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:04.711630  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:04.713406  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:04.920016  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:05.001292  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:05.206672  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:05.206747  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:05.425850  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:05.526652  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:05.705983  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:05.706714  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:05.919989  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:06.001093  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:06.205539  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:06.206504  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:06.419470  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:06.501546  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:06.707081  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:06.707699  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:06.920168  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:07.002802  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:07.206206  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:07.206387  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:07.419397  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:07.501943  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:07.706281  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:07.706876  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:07.919728  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:08.000953  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:08.217644  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:08.218521  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:08.421772  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:08.501494  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:08.705680  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:08.706937  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:08.919890  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:09.001492  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:09.214092  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:09.215700  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:09.419790  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:09.503041  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:09.707471  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:09.707737  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 21:17:09.920613  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:10.024913  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:10.207254  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:10.207331  278950 kapi.go:107] duration metric: took 1m30.505612317s to wait for kubernetes.io/minikube-addons=registry ...
	I0214 21:17:10.419538  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:10.500822  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:10.707618  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:10.920085  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:11.001530  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:11.205967  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:11.420638  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:11.502336  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:11.706132  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:11.919202  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:12.001061  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:12.224775  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:12.420041  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:12.501691  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:12.705975  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:12.920536  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:13.003508  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:13.206762  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:13.422342  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:13.504765  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:13.706912  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:13.936286  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:14.027003  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:14.206423  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:14.420015  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:14.501216  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:14.707274  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:14.920010  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:15.002668  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:15.217834  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:15.420752  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:15.501447  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:15.707270  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:15.919754  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:16.002113  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:16.208473  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:16.420918  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:16.502430  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:16.706659  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:16.920658  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:17.001434  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:17.206965  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:17.419845  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:17.501925  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:17.716623  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:17.921631  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:18.001817  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:18.206874  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:18.419769  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:18.501651  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:18.706738  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:18.922228  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:19.001939  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:19.206302  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:19.419648  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:19.500902  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:19.705747  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:19.920184  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:20.001489  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:20.207835  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:20.419902  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:20.503704  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:20.706049  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:20.920007  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:21.001723  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:21.208017  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:21.419577  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:21.501458  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:21.706347  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:21.919539  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:22.001156  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:22.206441  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:22.422522  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:22.501107  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:22.706183  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:22.920050  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:23.001056  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:23.206922  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:23.420543  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:23.501160  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:23.706475  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:23.919475  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:24.000624  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:24.206484  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:24.423575  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:24.523406  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:24.707119  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:24.919675  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 21:17:25.001689  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:25.206755  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:25.419884  278950 kapi.go:107] duration metric: took 1m41.003762674s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0214 21:17:25.423091  278950 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-794492 cluster.
	I0214 21:17:25.425957  278950 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0214 21:17:25.428756  278950 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0214 21:17:25.500844  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:25.706301  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:26.001225  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:26.211043  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:26.512056  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:26.706496  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:27.001168  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:27.206262  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:27.501681  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:27.706339  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:28.002372  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:28.207553  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:28.500800  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:28.706311  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:29.001150  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:29.209080  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:29.501779  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:29.715188  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:30.001670  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:30.208466  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:30.501916  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:30.705898  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:31.008022  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:31.206350  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:31.501087  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:31.708853  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:32.001577  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:32.206721  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:32.501964  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:32.706312  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:33.001830  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:33.206695  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:33.502529  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:33.705975  278950 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 21:17:34.001395  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:34.206193  278950 kapi.go:107] duration metric: took 1m54.503268585s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0214 21:17:34.501100  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:35.001770  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:35.507329  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:36.001091  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:36.501102  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:37.004386  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:37.502643  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:38.001530  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:38.500850  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:39.001290  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:39.501417  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:40.000660  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:40.501060  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:41.000959  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:41.502150  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:42.001309  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:42.500739  278950 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 21:17:43.001850  278950 kapi.go:107] duration metric: took 2m3.004248627s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0214 21:17:43.008042  278950 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, ingress-dns, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0214 21:17:43.011083  278950 addons.go:514] duration metric: took 2m10.557158049s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner ingress-dns inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0214 21:17:43.011162  278950 start.go:246] waiting for cluster config update ...
	I0214 21:17:43.011187  278950 start.go:255] writing updated cluster config ...
	I0214 21:17:43.011528  278950 ssh_runner.go:195] Run: rm -f paused
	I0214 21:17:43.016321  278950 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 21:17:43.019815  278950 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-5gz8n" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:43.025937  278950 pod_ready.go:94] pod "coredns-668d6bf9bc-5gz8n" is "Ready"
	I0214 21:17:43.025970  278950 pod_ready.go:86] duration metric: took 6.125038ms for pod "coredns-668d6bf9bc-5gz8n" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:43.028285  278950 pod_ready.go:83] waiting for pod "etcd-addons-794492" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:43.034910  278950 pod_ready.go:94] pod "etcd-addons-794492" is "Ready"
	I0214 21:17:43.034949  278950 pod_ready.go:86] duration metric: took 6.636655ms for pod "etcd-addons-794492" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:43.102822  278950 pod_ready.go:83] waiting for pod "kube-apiserver-addons-794492" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:43.110467  278950 pod_ready.go:94] pod "kube-apiserver-addons-794492" is "Ready"
	I0214 21:17:43.110498  278950 pod_ready.go:86] duration metric: took 7.646278ms for pod "kube-apiserver-addons-794492" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:43.112991  278950 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-794492" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:43.420061  278950 pod_ready.go:94] pod "kube-controller-manager-addons-794492" is "Ready"
	I0214 21:17:43.420090  278950 pod_ready.go:86] duration metric: took 307.070569ms for pod "kube-controller-manager-addons-794492" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:43.619967  278950 pod_ready.go:83] waiting for pod "kube-proxy-xqxb9" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:44.020882  278950 pod_ready.go:94] pod "kube-proxy-xqxb9" is "Ready"
	I0214 21:17:44.020911  278950 pod_ready.go:86] duration metric: took 400.916222ms for pod "kube-proxy-xqxb9" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:44.219935  278950 pod_ready.go:83] waiting for pod "kube-scheduler-addons-794492" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:44.620243  278950 pod_ready.go:94] pod "kube-scheduler-addons-794492" is "Ready"
	I0214 21:17:44.620269  278950 pod_ready.go:86] duration metric: took 400.308183ms for pod "kube-scheduler-addons-794492" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:17:44.620281  278950 pod_ready.go:40] duration metric: took 1.603921508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 21:17:45.047820  278950 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 21:17:45.053280  278950 out.go:177] * Done! kubectl is now configured to use "addons-794492" cluster and "default" namespace by default
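	
	[Editor's note on the gcp-auth hint logged at 21:17:25 above: the addon's own output says pods can opt out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec follows; the label key is quoted from the log output, while the label value "true", the pod name, and the image are assumptions for illustration, not taken from this run.]
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # placeholder name, not from this run
	  labels:
	    gcp-auth-skip-secret: "true"  # key per the addon output; value assumed
	spec:
	  containers:
	  - name: app                   # placeholder container
	    image: nginx                # placeholder image
	
	[Per the same output, pods created before the addon finished can pick up credentials by being recreated, or by rerunning `minikube addons enable gcp-auth --refresh`.]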
	
	
	==> CRI-O <==
	Feb 14 21:20:28 addons-794492 crio[975]: time="2025-02-14 21:20:28.944481841Z" level=info msg="Removed pod sandbox: 2c452c55ca19bb7333c08b2697ade04faf5b92d94a6224d15403fee301e4d194" id=f4e311d3-73a1-4ea3-ab3d-47e07eef3932 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 14 21:21:57 addons-794492 crio[975]: time="2025-02-14 21:21:57.958435337Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-tcfh5/POD" id=0a991682-abf9-464c-a92b-7940f6ad89f6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 14 21:21:57 addons-794492 crio[975]: time="2025-02-14 21:21:57.958494740Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 21:21:57 addons-794492 crio[975]: time="2025-02-14 21:21:57.995223712Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-tcfh5 Namespace:default ID:8f84e01bd647c1a13f95f35d36dbe800db975c3495688eebbff1f355500aa6ae UID:d4c7d46f-9f5e-4e08-a401-1d0a7bc46424 NetNS:/var/run/netns/a1af5f77-07c5-4f35-b842-39287b599f9e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 14 21:21:57 addons-794492 crio[975]: time="2025-02-14 21:21:57.995268511Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-tcfh5 to CNI network \"kindnet\" (type=ptp)"
	Feb 14 21:21:58 addons-794492 crio[975]: time="2025-02-14 21:21:58.009512885Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-tcfh5 Namespace:default ID:8f84e01bd647c1a13f95f35d36dbe800db975c3495688eebbff1f355500aa6ae UID:d4c7d46f-9f5e-4e08-a401-1d0a7bc46424 NetNS:/var/run/netns/a1af5f77-07c5-4f35-b842-39287b599f9e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 14 21:21:58 addons-794492 crio[975]: time="2025-02-14 21:21:58.009690227Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-tcfh5 for CNI network kindnet (type=ptp)"
	Feb 14 21:21:58 addons-794492 crio[975]: time="2025-02-14 21:21:58.017287610Z" level=info msg="Ran pod sandbox 8f84e01bd647c1a13f95f35d36dbe800db975c3495688eebbff1f355500aa6ae with infra container: default/hello-world-app-7d9564db4-tcfh5/POD" id=0a991682-abf9-464c-a92b-7940f6ad89f6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 14 21:21:58 addons-794492 crio[975]: time="2025-02-14 21:21:58.019903272Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5c70393d-f758-4eb6-ba3b-bc411b861fdf name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:21:58 addons-794492 crio[975]: time="2025-02-14 21:21:58.020142100Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=5c70393d-f758-4eb6-ba3b-bc411b861fdf name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:21:58 addons-794492 crio[975]: time="2025-02-14 21:21:58.023893035Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1ec6c373-0fd9-426b-bfb3-fd6835df70f3 name=/runtime.v1.ImageService/PullImage
	Feb 14 21:21:58 addons-794492 crio[975]: time="2025-02-14 21:21:58.026272600Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 14 21:21:58 addons-794492 crio[975]: time="2025-02-14 21:21:58.262557190Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.095309938Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=1ec6c373-0fd9-426b-bfb3-fd6835df70f3 name=/runtime.v1.ImageService/PullImage
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.095938730Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e3b8c7fc-c6f9-4688-af26-bc46b2e5a3b8 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.096613847Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e3b8c7fc-c6f9-4688-af26-bc46b2e5a3b8 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.097575923Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e8ff15f4-b6ca-47b4-8217-df8465232b15 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.098179657Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e8ff15f4-b6ca-47b4-8217-df8465232b15 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.098916220Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-tcfh5/hello-world-app" id=062a00cd-aaf2-4a44-b899-10cf6e9b1a28 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.098997063Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.124904029Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/341136514e247c4123a63607308e712d64c770b960f040075fe1de4ce8b49987/merged/etc/passwd: no such file or directory"
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.125128622Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/341136514e247c4123a63607308e712d64c770b960f040075fe1de4ce8b49987/merged/etc/group: no such file or directory"
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.183318824Z" level=info msg="Created container 9615611b60fdf88a639d09c43977af1230dccba2ebac411d45b462e06ee39864: default/hello-world-app-7d9564db4-tcfh5/hello-world-app" id=062a00cd-aaf2-4a44-b899-10cf6e9b1a28 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.186172142Z" level=info msg="Starting container: 9615611b60fdf88a639d09c43977af1230dccba2ebac411d45b462e06ee39864" id=5bcc92d1-ee1b-41ed-ad4c-1c9f746133d2 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 21:21:59 addons-794492 crio[975]: time="2025-02-14 21:21:59.197008194Z" level=info msg="Started container" PID=8731 containerID=9615611b60fdf88a639d09c43977af1230dccba2ebac411d45b462e06ee39864 description=default/hello-world-app-7d9564db4-tcfh5/hello-world-app id=5bcc92d1-ee1b-41ed-ad4c-1c9f746133d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f84e01bd647c1a13f95f35d36dbe800db975c3495688eebbff1f355500aa6ae
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	9615611b60fdf       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   8f84e01bd647c       hello-world-app-7d9564db4-tcfh5
	3b686edb68c11       docker.io/library/nginx@sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef                              2 minutes ago            Running             nginx                     0                   a2e5c5d0b0c30       nginx
	45aa2da6769c4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   2fae33fed7c2b       busybox
	6807a403fa896       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             4 minutes ago            Running             controller                0                   a2f47df036276       ingress-nginx-controller-56d7c84fd4-dvrb8
	11cc6e599d3cb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   4 minutes ago            Exited              patch                     0                   00d32e7129db3       ingress-nginx-admission-patch-jm4tc
	0e24195a2dc23       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   4 minutes ago            Exited              create                    0                   fe842a67e6e51       ingress-nginx-admission-create-s4gbk
	61cf5ac1f6e41       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns      0                   c6abde0a3c1b7       kube-ingress-dns-minikube
	be4a241348182       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   d74c44446e2f8       storage-provisioner
	bc773715c4326       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             5 minutes ago            Running             coredns                   0                   932e24df5fc5a       coredns-668d6bf9bc-5gz8n
	2c72c9fbe4eb8       docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be                           6 minutes ago            Running             kindnet-cni               0                   ceacec17564c8       kindnet-kmmf5
	93793661459a0       e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0                                                             6 minutes ago            Running             kube-proxy                0                   028695ac1659e       kube-proxy-xqxb9
	215b2a9ca9851       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19                                                             6 minutes ago            Running             kube-apiserver            0                   c3307dded2374       kube-apiserver-addons-794492
	2fefd25400594       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             6 minutes ago            Running             etcd                      0                   39a5ebf598fb0       etcd-addons-794492
	22fee50d04b14       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c                                                             6 minutes ago            Running             kube-scheduler            0                   0cade0b549eee       kube-scheduler-addons-794492
	27c3d314c14fd       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13                                                             6 minutes ago            Running             kube-controller-manager   0                   8338e803cec4d       kube-controller-manager-addons-794492
	
	
	==> coredns [bc773715c43265a3161e5e13857ead8aaff1ed497ddb2f44f4d2dbf6c78ab970] <==
	[INFO] 10.244.0.13:45443 - 59074 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002895676s
	[INFO] 10.244.0.13:45443 - 13220 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000136973s
	[INFO] 10.244.0.13:45443 - 34752 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000216027s
	[INFO] 10.244.0.13:34272 - 13165 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163696s
	[INFO] 10.244.0.13:34272 - 12919 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000284915s
	[INFO] 10.244.0.13:45496 - 54060 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107337s
	[INFO] 10.244.0.13:45496 - 53855 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000171696s
	[INFO] 10.244.0.13:45528 - 2108 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151421s
	[INFO] 10.244.0.13:45528 - 2553 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000203597s
	[INFO] 10.244.0.13:37287 - 8660 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002516847s
	[INFO] 10.244.0.13:37287 - 8207 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002600102s
	[INFO] 10.244.0.13:37407 - 17525 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000217471s
	[INFO] 10.244.0.13:37407 - 17912 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000447906s
	[INFO] 10.244.0.20:35047 - 30493 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192332s
	[INFO] 10.244.0.20:45889 - 44202 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147302s
	[INFO] 10.244.0.20:34967 - 58430 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135931s
	[INFO] 10.244.0.20:39063 - 21576 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111505s
	[INFO] 10.244.0.20:38779 - 56168 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124132s
	[INFO] 10.244.0.20:54446 - 58618 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093192s
	[INFO] 10.244.0.20:34386 - 4179 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004203916s
	[INFO] 10.244.0.20:49111 - 22638 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001802176s
	[INFO] 10.244.0.20:44872 - 60452 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003901237s
	[INFO] 10.244.0.20:35562 - 37619 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00420312s
	[INFO] 10.244.0.24:52903 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000232552s
	[INFO] 10.244.0.24:52232 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000180032s
	
	
	==> describe nodes <==
	Name:               addons-794492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-794492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a
	                    minikube.k8s.io/name=addons-794492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_14T21_15_28_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-794492
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 14 Feb 2025 21:15:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-794492
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 14 Feb 2025 21:21:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 14 Feb 2025 21:20:03 +0000   Fri, 14 Feb 2025 21:15:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 14 Feb 2025 21:20:03 +0000   Fri, 14 Feb 2025 21:15:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 14 Feb 2025 21:20:03 +0000   Fri, 14 Feb 2025 21:15:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 14 Feb 2025 21:20:03 +0000   Fri, 14 Feb 2025 21:16:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-794492
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7889802ab6d04f8b9e320676740f856a
	  System UUID:                21dd13ba-4e6d-4d5f-969a-80e657c31326
	  Boot ID:                    e73e80e8-f4f5-4b6f-baaf-c79d4b748ea0
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  default                     hello-world-app-7d9564db4-tcfh5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-dvrb8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m20s
	  kube-system                 coredns-668d6bf9bc-5gz8n                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m27s
	  kube-system                 etcd-addons-794492                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m31s
	  kube-system                 kindnet-kmmf5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m27s
	  kube-system                 kube-apiserver-addons-794492                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-addons-794492        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-xqxb9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-addons-794492                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m20s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m39s (x8 over 6m39s)  kubelet          Node addons-794492 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m39s (x8 over 6m39s)  kubelet          Node addons-794492 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m39s (x8 over 6m39s)  kubelet          Node addons-794492 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m32s                  kubelet          Node addons-794492 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m32s                  kubelet          Node addons-794492 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m32s                  kubelet          Node addons-794492 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m28s                  node-controller  Node addons-794492 event: Registered Node addons-794492 in Controller
	  Normal   NodeReady                5m40s                  kubelet          Node addons-794492 status is now: NodeReady
	
	
	==> dmesg <==
	[Feb14 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013894] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498066] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032966] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753101] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +4.898997] kauditd_printk_skb: 36 callbacks suppressed
	[Feb14 20:18] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb14 20:49] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +1.296116] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [2fefd254005941e1f8275d0fba0d8e4d22a4f6e5fec44b11a8c3f03e704012ed] <==
	{"level":"info","ts":"2025-02-14T21:15:22.643396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:15:22.643775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:15:22.644352Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:15:22.645541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-14T21:15:22.644464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:15:22.645936Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:15:22.645991Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:15:22.644849Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:15:22.646703Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-14T21:15:22.651279Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-14T21:15:22.651365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-14T21:15:33.663469Z","caller":"traceutil/trace.go:171","msg":"trace[605453030] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"111.737757ms","start":"2025-02-14T21:15:33.551712Z","end":"2025-02-14T21:15:33.663450Z","steps":["trace[605453030] 'process raft request'  (duration: 87.718388ms)","trace[605453030] 'compare'  (duration: 23.86282ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-14T21:15:36.827729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.081719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-14T21:15:36.830102Z","caller":"traceutil/trace.go:171","msg":"trace[826146617] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:393; }","duration":"102.497118ms","start":"2025-02-14T21:15:36.727588Z","end":"2025-02-14T21:15:36.830085Z","steps":["trace[826146617] 'agreement among raft nodes before linearized reading'  (duration: 100.059901ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T21:15:36.830473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.908587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2025-02-14T21:15:36.830556Z","caller":"traceutil/trace.go:171","msg":"trace[1371515087] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:393; }","duration":"103.003214ms","start":"2025-02-14T21:15:36.727543Z","end":"2025-02-14T21:15:36.830547Z","steps":["trace[1371515087] 'agreement among raft nodes before linearized reading'  (duration: 86.750942ms)","trace[1371515087] 'get authentication metadata'  (duration: 16.135959ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-14T21:15:36.950605Z","caller":"traceutil/trace.go:171","msg":"trace[680900857] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"115.275066ms","start":"2025-02-14T21:15:36.835308Z","end":"2025-02-14T21:15:36.950583Z","steps":["trace[680900857] 'process raft request'  (duration: 32.295311ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T21:15:36.950924Z","caller":"traceutil/trace.go:171","msg":"trace[2032334109] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"115.52977ms","start":"2025-02-14T21:15:36.835382Z","end":"2025-02-14T21:15:36.950912Z","steps":["trace[2032334109] 'process raft request'  (duration: 32.323462ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T21:15:37.692747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.450572ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2025-02-14T21:15:37.700163Z","caller":"traceutil/trace.go:171","msg":"trace[1922538046] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"156.585417ms","start":"2025-02-14T21:15:37.543554Z","end":"2025-02-14T21:15:37.700140Z","steps":["trace[1922538046] 'process raft request'  (duration: 74.695478ms)","trace[1922538046] 'compare'  (duration: 74.330778ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-14T21:15:37.700203Z","caller":"traceutil/trace.go:171","msg":"trace[1537035485] linearizableReadLoop","detail":"{readStateIndex:445; appliedIndex:444; }","duration":"117.924774ms","start":"2025-02-14T21:15:37.582272Z","end":"2025-02-14T21:15:37.700197Z","steps":["trace[1537035485] 'read index received'  (duration: 31.769µs)","trace[1537035485] 'applied index is now lower than readState.Index'  (duration: 117.892086ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-14T21:15:37.700257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.967751ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/yakd-dashboard/yakd-dashboard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-14T21:15:37.707321Z","caller":"traceutil/trace.go:171","msg":"trace[2121025108] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:443; }","duration":"125.03214ms","start":"2025-02-14T21:15:37.582268Z","end":"2025-02-14T21:15:37.707300Z","steps":["trace[2121025108] 'agreement among raft nodes before linearized reading'  (duration: 117.951349ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T21:15:37.700285Z","caller":"traceutil/trace.go:171","msg":"trace[1179162475] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"117.922165ms","start":"2025-02-14T21:15:37.582357Z","end":"2025-02-14T21:15:37.700279Z","steps":["trace[1179162475] 'process raft request'  (duration: 116.992525ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T21:15:37.700366Z","caller":"traceutil/trace.go:171","msg":"trace[1386704562] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:433; }","duration":"157.073633ms","start":"2025-02-14T21:15:37.543268Z","end":"2025-02-14T21:15:37.700342Z","steps":["trace[1386704562] 'range keys from in-memory index tree'  (duration: 149.361549ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:21:59 up  2:04,  0 users,  load average: 0.39, 1.25, 2.06
	Linux addons-794492 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2c72c9fbe4eb81a5ac9003af3b721df23d2b188dfa7b03ea802cbd73363d27fe] <==
	I0214 21:19:58.976602       1 main.go:301] handling current node
	I0214 21:20:08.973214       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:20:08.973248       1 main.go:301] handling current node
	I0214 21:20:18.977649       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:20:18.977683       1 main.go:301] handling current node
	I0214 21:20:28.979120       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:20:28.979153       1 main.go:301] handling current node
	I0214 21:20:38.971339       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:20:38.971480       1 main.go:301] handling current node
	I0214 21:20:48.977914       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:20:48.977948       1 main.go:301] handling current node
	I0214 21:20:58.978087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:20:58.978119       1 main.go:301] handling current node
	I0214 21:21:08.978619       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:21:08.978654       1 main.go:301] handling current node
	I0214 21:21:18.976964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:21:18.977003       1 main.go:301] handling current node
	I0214 21:21:28.978392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:21:28.978428       1 main.go:301] handling current node
	I0214 21:21:38.970504       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:21:38.970543       1 main.go:301] handling current node
	I0214 21:21:48.976943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:21:48.977074       1 main.go:301] handling current node
	I0214 21:21:58.971108       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:21:58.971140       1 main.go:301] handling current node
	
	
	==> kube-apiserver [215b2a9ca98513c0249b5551566fb76e20a16322e7aeea1a49bb78a4db4888d9] <==
	I0214 21:18:06.021974       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.232.91"}
	E0214 21:18:41.627967       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0214 21:18:41.638938       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0214 21:18:41.653166       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0214 21:18:56.656135       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0214 21:19:08.660917       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0214 21:19:30.205313       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 21:19:30.207077       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 21:19:30.257845       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 21:19:30.257892       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 21:19:30.320529       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 21:19:30.324441       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 21:19:30.398219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 21:19:30.398418       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 21:19:30.454568       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 21:19:30.455016       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 21:19:30.934880       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0214 21:19:31.398574       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0214 21:19:31.456968       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0214 21:19:31.626908       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0214 21:19:31.969831       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0214 21:19:36.511280       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0214 21:19:36.752567       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.48.216"}
	I0214 21:19:45.157982       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0214 21:21:57.881551       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.141.179"}
	
	
	==> kube-controller-manager [27c3d314c14fd1278277e27adb818d82e67580a77adc949591e77ab7bdee9a01] <==
	E0214 21:20:48.458500       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0214 21:20:48.459503       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 21:20:48.459596       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 21:21:18.807713       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 21:21:18.808813       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0214 21:21:18.809862       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 21:21:18.809902       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 21:21:20.165653       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 21:21:20.166688       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0214 21:21:20.167697       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 21:21:20.167735       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 21:21:26.571946       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 21:21:26.572989       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0214 21:21:26.573910       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 21:21:26.573959       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 21:21:34.784797       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 21:21:34.786334       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0214 21:21:34.787794       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 21:21:34.787835       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0214 21:21:57.655205       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="48.511329ms"
	I0214 21:21:57.667505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.149971ms"
	I0214 21:21:57.667785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="75.1µs"
	I0214 21:21:57.674835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="59.166µs"
	I0214 21:21:59.577975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="6.598484ms"
	I0214 21:21:59.578846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="38.186µs"
	
	
	==> kube-proxy [93793661459a0dbf9f787f6b5f18e8416c17ee2e6c80c40f6b15de95dcbc7614] <==
	I0214 21:15:39.003956       1 server_linux.go:66] "Using iptables proxy"
	I0214 21:15:39.281065       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0214 21:15:39.281142       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0214 21:15:39.329112       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 21:15:39.329344       1 server_linux.go:170] "Using iptables Proxier"
	I0214 21:15:39.332229       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0214 21:15:39.333671       1 server.go:497] "Version info" version="v1.32.1"
	I0214 21:15:39.333752       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:15:39.347747       1 config.go:199] "Starting service config controller"
	I0214 21:15:39.347854       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0214 21:15:39.347910       1 config.go:105] "Starting endpoint slice config controller"
	I0214 21:15:39.347939       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0214 21:15:39.348543       1 config.go:329] "Starting node config controller"
	I0214 21:15:39.348607       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0214 21:15:39.448831       1 shared_informer.go:320] Caches are synced for node config
	I0214 21:15:39.551724       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0214 21:15:39.551845       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [22fee50d04b14f0e1296bd064c2820d82205aa9d94d39ea3bfda5d8e230c893f] <==
	W0214 21:15:24.996253       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0214 21:15:24.996316       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 21:15:24.996359       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0214 21:15:24.996391       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0214 21:15:24.996445       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 21:15:24.996493       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0214 21:15:24.996516       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:15:24.996548       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 21:15:24.996616       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0214 21:15:24.996330       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:15:25.837853       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0214 21:15:25.837892       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0214 21:15:25.878561       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 21:15:25.878604       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0214 21:15:25.972226       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 21:15:25.972385       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:15:25.979793       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0214 21:15:25.979835       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:15:26.111008       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 21:15:26.111204       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:15:26.132979       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 21:15:26.133040       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0214 21:15:26.192569       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0214 21:15:26.192610       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0214 21:15:27.787220       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.849949    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7c64564a8299ea0f86976be6b5ba6a8fada1b7e064a13dc3f61919054323e69d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7c64564a8299ea0f86976be6b5ba6a8fada1b7e064a13dc3f61919054323e69d/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.854348    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5dbd119a8c5d70dddd215b715798281a6741196223663b8eaee4d22d0a64235a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5dbd119a8c5d70dddd215b715798281a6741196223663b8eaee4d22d0a64235a/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.854484    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7c64564a8299ea0f86976be6b5ba6a8fada1b7e064a13dc3f61919054323e69d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7c64564a8299ea0f86976be6b5ba6a8fada1b7e064a13dc3f61919054323e69d/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.857661    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e36d5bddf7af8477abc49faf03cef79cc194170465fb2a3f5c041965b4e38b00/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e36d5bddf7af8477abc49faf03cef79cc194170465fb2a3f5c041965b4e38b00/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.857713    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/17f4caadbacc91524a6e5aa07a3a11a79e3386cd614b03c2381330f16b3d497a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/17f4caadbacc91524a6e5aa07a3a11a79e3386cd614b03c2381330f16b3d497a/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.858865    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b0997881db2d2072693f9f1a8d4263b5b11e442004f9b21d4244aee5e481706b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b0997881db2d2072693f9f1a8d4263b5b11e442004f9b21d4244aee5e481706b/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.861070    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5dbd119a8c5d70dddd215b715798281a6741196223663b8eaee4d22d0a64235a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5dbd119a8c5d70dddd215b715798281a6741196223663b8eaee4d22d0a64235a/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.861095    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d9b171d818f177cbf6d67ef9b77af0d2b766303dc03d815cfe1a899d22d1fb61/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d9b171d818f177cbf6d67ef9b77af0d2b766303dc03d815cfe1a899d22d1fb61/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.864324    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5ade6a7bf1a87a9ae94ba1741454ae444a5e36788b63af978915ace36c40a593/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5ade6a7bf1a87a9ae94ba1741454ae444a5e36788b63af978915ace36c40a593/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.873882    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5ade6a7bf1a87a9ae94ba1741454ae444a5e36788b63af978915ace36c40a593/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5ade6a7bf1a87a9ae94ba1741454ae444a5e36788b63af978915ace36c40a593/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:27 addons-794492 kubelet[1535]: E0214 21:21:27.939864    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a73ab2ebc470c2d53201a0c9e40affb66defaac5a6d98285c69ac373442e0a3f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a73ab2ebc470c2d53201a0c9e40affb66defaac5a6d98285c69ac373442e0a3f/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:28 addons-794492 kubelet[1535]: E0214 21:21:28.004318    1535 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568088003974736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:21:28 addons-794492 kubelet[1535]: E0214 21:21:28.004359    1535 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568088003974736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:21:28 addons-794492 kubelet[1535]: E0214 21:21:28.240791    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/523d3d8a1b6eef65d66f3621daf65de53a26947294e59ae246df4dc0100532d0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/523d3d8a1b6eef65d66f3621daf65de53a26947294e59ae246df4dc0100532d0/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:34 addons-794492 kubelet[1535]: I0214 21:21:34.710419    1535 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 14 21:21:38 addons-794492 kubelet[1535]: E0214 21:21:38.009468    1535 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568098009088889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:21:38 addons-794492 kubelet[1535]: E0214 21:21:38.009512    1535 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568098009088889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:21:48 addons-794492 kubelet[1535]: E0214 21:21:48.012981    1535 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568108012633030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:21:48 addons-794492 kubelet[1535]: E0214 21:21:48.013026    1535 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568108012633030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:21:57 addons-794492 kubelet[1535]: E0214 21:21:57.205994    1535 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cded84d4df06847bed3b57151ad58c1930e4584598928bb5e0d8fe9747af99ff/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cded84d4df06847bed3b57151ad58c1930e4584598928bb5e0d8fe9747af99ff/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:21:57 addons-794492 kubelet[1535]: I0214 21:21:57.656537    1535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=139.299109723 podStartE2EDuration="2m21.656517309s" podCreationTimestamp="2025-02-14 21:19:36 +0000 UTC" firstStartedPulling="2025-02-14 21:19:37.041386198 +0000 UTC m=+249.446593018" lastFinishedPulling="2025-02-14 21:19:39.398793784 +0000 UTC m=+251.804000604" observedRunningTime="2025-02-14 21:19:40.302210392 +0000 UTC m=+252.707417212" watchObservedRunningTime="2025-02-14 21:21:57.656517309 +0000 UTC m=+390.061724137"
	Feb 14 21:21:57 addons-794492 kubelet[1535]: I0214 21:21:57.730028    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58cb\" (UniqueName: \"kubernetes.io/projected/d4c7d46f-9f5e-4e08-a401-1d0a7bc46424-kube-api-access-q58cb\") pod \"hello-world-app-7d9564db4-tcfh5\" (UID: \"d4c7d46f-9f5e-4e08-a401-1d0a7bc46424\") " pod="default/hello-world-app-7d9564db4-tcfh5"
	Feb 14 21:21:58 addons-794492 kubelet[1535]: E0214 21:21:58.017295    1535 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568118017028737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:21:58 addons-794492 kubelet[1535]: E0214 21:21:58.017328    1535 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568118017028737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:21:59 addons-794492 kubelet[1535]: I0214 21:21:59.569317    1535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-7d9564db4-tcfh5" podStartSLOduration=1.495908187 podStartE2EDuration="2.569298577s" podCreationTimestamp="2025-02-14 21:21:57 +0000 UTC" firstStartedPulling="2025-02-14 21:21:58.023439681 +0000 UTC m=+390.428646501" lastFinishedPulling="2025-02-14 21:21:59.096830071 +0000 UTC m=+391.502036891" observedRunningTime="2025-02-14 21:21:59.569121646 +0000 UTC m=+391.974328483" watchObservedRunningTime="2025-02-14 21:21:59.569298577 +0000 UTC m=+391.974505397"
	
	
	==> storage-provisioner [be4a2413481827ba896f0fdbf28de127f36ab7e5f283c61a5efa79413c8e6d9f] <==
	I0214 21:16:20.384274       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 21:16:20.413382       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 21:16:20.415654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 21:16:20.438132       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 21:16:20.438370       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-794492_4c7166e1-a513-4371-a8c9-e4253240f1c4!
	I0214 21:16:20.449404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"428bf1f8-7af9-49af-9d28-93c36885d0fd", APIVersion:"v1", ResourceVersion:"932", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-794492_4c7166e1-a513-4371-a8c9-e4253240f1c4 became leader
	I0214 21:16:20.538831       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-794492_4c7166e1-a513-4371-a8c9-e4253240f1c4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-794492 -n addons-794492
helpers_test.go:261: (dbg) Run:  kubectl --context addons-794492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-s4gbk ingress-nginx-admission-patch-jm4tc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-794492 describe pod ingress-nginx-admission-create-s4gbk ingress-nginx-admission-patch-jm4tc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-794492 describe pod ingress-nginx-admission-create-s4gbk ingress-nginx-admission-patch-jm4tc: exit status 1 (108.603974ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s4gbk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jm4tc" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-794492 describe pod ingress-nginx-admission-create-s4gbk ingress-nginx-admission-patch-jm4tc: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 addons disable ingress-dns --alsologtostderr -v=1: (1.285499567s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 addons disable ingress --alsologtostderr -v=1: (7.786459713s)
--- FAIL: TestAddons/parallel/Ingress (154.20s)
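Triage note: the failure is the HTTP check against the deployed Ingress never returning. A minimal manual check, sketched under the assumption that the ingress addon uses the default ingress-nginx-controller Service in the ingress-nginx namespace, bypasses the node's port-80 wiring with a port-forward:

	# Confirm the controller itself answers before suspecting the Ingress object or DNS.
	kubectl --context addons-794492 -n ingress-nginx get pods,svc
	kubectl --context addons-794492 -n ingress-nginx port-forward svc/ingress-nginx-controller 8080:80 &
	curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1:8080/

A 200 here while the test still fails would point at node-level routing rather than at the controller or the nginx backend.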

TestFunctional/parallel/PersistentVolumeClaim (202.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [108e7316-4378-44f1-b7e7-6076b70daa8c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004195942s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-264648 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-264648 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-264648 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-264648 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ad635387-ba99-4d32-931a-8f4edca60c9a] Pending
helpers_test.go:344: "sp-pod" [ad635387-ba99-4d32-931a-8f4edca60c9a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ad635387-ba99-4d32-931a-8f4edca60c9a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.002924409s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-264648 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-264648 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-264648 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [32713a12-848d-43f5-b2ad-870e19e9bc10] Pending
helpers_test.go:344: "sp-pod" [32713a12-848d-43f5-b2ad-870e19e9bc10] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-264648 -n functional-264648
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-02-14 21:28:50.083878398 +0000 UTC m=+868.163003673
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-264648 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-264648 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-264648/192.168.49.2
Start Time:       Fri, 14 Feb 2025 21:25:49 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mztr8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-mztr8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/sp-pod to functional-264648
  Warning  Failed     108s (x2 over 2m30s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    84s (x3 over 3m)      kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     40s (x3 over 2m30s)   kubelet            Error: ErrImagePull
  Warning  Failed     40s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    13s (x4 over 2m30s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     13s (x4 over 2m30s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-264648 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-264648 logs sp-pod -n default: exit status 1 (97.073165ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-264648 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
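Triage note: the pod events show the real blocker is Docker Hub's pull rate limit (toomanyrequests), not the storage provisioner. A hypothetical mitigation, assuming a host-side Docker daemon with remaining pull quota (or authenticated access), is to side-load the image so the kubelet never pulls from the registry:

	# Pull once on the host, then load it into the minikube node's image store.
	docker pull docker.io/nginx
	out/minikube-linux-arm64 -p functional-264648 image load docker.io/nginx
	# Re-create the pod; with the image already present, the pull is skipped
	# unless the pod's imagePullPolicy is Always.
	kubectl --context functional-264648 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-264648 apply -f testdata/storage-provisioner/pod.yaml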
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-264648
helpers_test.go:235: (dbg) docker inspect functional-264648:

-- stdout --
	[
	    {
	        "Id": "25670fe1b4bc753554415b10d89c74ded3d489e1afed3cecb33e875fc010af84",
	        "Created": "2025-02-14T21:23:18.404313719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-14T21:23:18.554291826Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:86f383d95829214691bb905fe90945d8bf2efbbe5a717e0830a616744d143ec9",
	        "ResolvConfPath": "/var/lib/docker/containers/25670fe1b4bc753554415b10d89c74ded3d489e1afed3cecb33e875fc010af84/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25670fe1b4bc753554415b10d89c74ded3d489e1afed3cecb33e875fc010af84/hostname",
	        "HostsPath": "/var/lib/docker/containers/25670fe1b4bc753554415b10d89c74ded3d489e1afed3cecb33e875fc010af84/hosts",
	        "LogPath": "/var/lib/docker/containers/25670fe1b4bc753554415b10d89c74ded3d489e1afed3cecb33e875fc010af84/25670fe1b4bc753554415b10d89c74ded3d489e1afed3cecb33e875fc010af84-json.log",
	        "Name": "/functional-264648",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-264648:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-264648",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/09e279d74c50e108ac72aa9c2b18e095d681e0b1c442c4efdaf16eec572fad6b-init/diff:/var/lib/docker/overlay2/98047733aa5d86fafdd36d9f264e1aa5c3c6b5243d320c9d2e042ec72038fd21/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09e279d74c50e108ac72aa9c2b18e095d681e0b1c442c4efdaf16eec572fad6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09e279d74c50e108ac72aa9c2b18e095d681e0b1c442c4efdaf16eec572fad6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09e279d74c50e108ac72aa9c2b18e095d681e0b1c442c4efdaf16eec572fad6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-264648",
	                "Source": "/var/lib/docker/volumes/functional-264648/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-264648",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-264648",
	                "name.minikube.sigs.k8s.io": "functional-264648",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4d42eaf6191c5e9835fb22309c4fc145d0598b65371f95370bcb79f0523faf0",
	            "SandboxKey": "/var/run/docker/netns/a4d42eaf6191",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-264648": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "39e2395d9b30658ed663ecf1037524c500a119799f4dea3f4268753909eab541",
	                    "EndpointID": "f92022d8912d3b02eb5dac530d7e90b0753b82bd7cec03d2f667104890d66c7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-264648",
	                        "25670fe1b4bc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
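The inspect output above shows every node port published to loopback only (SSH 22 -> 33146, Docker 2376 -> 33147, registry 5000 -> 33148, API server 8441 -> 33149, 32443 -> 33150) and the node attached to the "functional-264648" bridge network at 192.168.49.2. As an illustration only (not part of the test run), a single forwarded port can be read back with docker's Go-template filter:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' functional-264648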
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-264648 -n functional-264648
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 logs -n 25: (1.829213101s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-264648                                                     | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port2271042969/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh findmnt                                            | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh findmnt                                            | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh -- ls                                              | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh sudo                                               | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-264648                                                     | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-264648                                                     | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-264648                                                     | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh findmnt                                            | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh findmnt                                            | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh findmnt                                            | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh findmnt                                            | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-264648                                                     | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| start          | -p functional-264648                                                     | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC |                     |
	|                | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|                | --driver=docker                                                          |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                       | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:27 UTC |
	|                | -p functional-264648                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| update-context | functional-264648                                                        | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-264648                                                        | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-264648                                                        | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-264648                                                        | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-264648                                                        | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-264648 ssh pgrep                                              | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-264648 image build -t                                         | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	|                | localhost/my-image:functional-264648                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-264648 image ls                                               | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	| image          | functional-264648                                                        | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-264648                                                        | functional-264648 | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:27 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:26:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:26:32.635623  307058 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:26:32.635776  307058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:26:32.635800  307058 out.go:358] Setting ErrFile to fd 2...
	I0214 21:26:32.635822  307058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:26:32.637455  307058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:26:32.637927  307058 out.go:352] Setting JSON to false
	I0214 21:26:32.638848  307058 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7740,"bootTime":1739560653,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 21:26:32.639276  307058 start.go:140] virtualization:  
	I0214 21:26:32.642965  307058 out.go:177] * [functional-264648] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0214 21:26:32.647090  307058 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:26:32.647258  307058 notify.go:220] Checking for updates...
	I0214 21:26:32.653541  307058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:26:32.656482  307058 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:26:32.659452  307058 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	I0214 21:26:32.662443  307058 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 21:26:32.665502  307058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:26:32.668982  307058 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:26:32.669575  307058 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:26:32.705868  307058 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0214 21:26:32.706011  307058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:26:32.773530  307058 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-14 21:26:32.763690898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:26:32.773645  307058 docker.go:318] overlay module found
	I0214 21:26:32.776745  307058 out.go:177] * Using the docker driver based on existing profile
	I0214 21:26:32.779634  307058 start.go:304] selected driver: docker
	I0214 21:26:32.779662  307058 start.go:908] validating driver "docker" against &{Name:functional-264648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-264648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:26:32.779787  307058 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:26:32.783445  307058 out.go:201] 
	W0214 21:26:32.786385  307058 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0214 21:26:32.789210  307058 out.go:201] 
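	The dry-run start above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below minikube's usable floor of 1800MB; that is the validation step working as intended, not a crash. For illustration only, the same dry-run with a request at or above the floor should clear the memory check:
	
	  out/minikube-linux-arm64 start -p functional-264648 --dry-run --memory 2048mb --alsologtostderr --driver=docker --container-runtime=crio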
	
	
	==> CRI-O <==
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.437515322Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf],Size_:247562353,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e622f9bb-bb01-4610-815d-9119e3c5ee6c name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.437543481Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.438220732Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-klvrp/kubernetes-dashboard" id=39bee77c-e208-47cc-8736-3ccb37dae469 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.438303856Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.457869821Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/566d39cda6e1fe839807885fc95ac183a32a4bd9c34eff920d6ac17dbe41b0b3/merged/etc/group: no such file or directory"
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.503501222Z" level=info msg="Created container a241ed519597bc81b35e97ff6fcbc221acb913a4814eceded4fd3771e0241d12: kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-klvrp/kubernetes-dashboard" id=39bee77c-e208-47cc-8736-3ccb37dae469 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.504301063Z" level=info msg="Starting container: a241ed519597bc81b35e97ff6fcbc221acb913a4814eceded4fd3771e0241d12" id=277bdfb2-9ee9-41f7-9051-07ea2ca71c9d name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.514464934Z" level=info msg="Started container" PID=7068 containerID=a241ed519597bc81b35e97ff6fcbc221acb913a4814eceded4fd3771e0241d12 description=kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-klvrp/kubernetes-dashboard id=277bdfb2-9ee9-41f7-9051-07ea2ca71c9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=60695cdf176130e27190bcd33086c903d34167bcd0e339c033d1e0d3a69dda34
	Feb 14 21:27:07 functional-264648 crio[4159]: time="2025-02-14 21:27:07.689857369Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Feb 14 21:27:08 functional-264648 crio[4159]: time="2025-02-14 21:27:08.982496797Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\", expecting one of \"linux+arm64\""
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.571402717Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=f799320d-0abd-44d2-8f66-b45a5dbcf8a5 name=/runtime.v1.ImageService/PullImage
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.572234491Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=3ebf7a62-e655-480e-9d68-5cf210948ead name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.573119302Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a],Size_:42263767,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=3ebf7a62-e655-480e-9d68-5cf210948ead name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.575229493Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=13013f33-1b57-4d18-813c-fd5ceee08643 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.576167061Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a],Size_:42263767,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=13013f33-1b57-4d18-813c-fd5ceee08643 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.577047728Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-q4849/dashboard-metrics-scraper" id=d04dc461-4c4b-42a4-87bd-41ded7191539 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.577158519Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.599138348Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c2f76f6d71ff52f0e2956d77cc64be81c67c44f06acd0d7e305d99dd195dd039/merged/etc/group: no such file or directory"
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.640802046Z" level=info msg="Created container 520cfc0080321a06634a4ee09eb469caa18a09ce8a576d4d68b2cdfb4c631e87: kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-q4849/dashboard-metrics-scraper" id=d04dc461-4c4b-42a4-87bd-41ded7191539 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.641628799Z" level=info msg="Starting container: 520cfc0080321a06634a4ee09eb469caa18a09ce8a576d4d68b2cdfb4c631e87" id=a841a7a6-ddfe-4dcb-9e68-bd73afe3f775 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 21:27:10 functional-264648 crio[4159]: time="2025-02-14 21:27:10.647278708Z" level=info msg="Started container" PID=7121 containerID=520cfc0080321a06634a4ee09eb469caa18a09ce8a576d4d68b2cdfb4c631e87 description=kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-q4849/dashboard-metrics-scraper id=a841a7a6-ddfe-4dcb-9e68-bd73afe3f775 name=/runtime.v1.RuntimeService/StartContainer sandboxID=641b51a0a02c9e2666ef318f7406ae9ded872ea6e55d373923d35949ba68b178
	Feb 14 21:27:26 functional-264648 crio[4159]: time="2025-02-14 21:27:26.282697025Z" level=info msg="Pulling image: docker.io/nginx:latest" id=885ca6e0-393f-4136-a019-b674bf43df3f name=/runtime.v1.ImageService/PullImage
	Feb 14 21:27:26 functional-264648 crio[4159]: time="2025-02-14 21:27:26.284182977Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Feb 14 21:28:51 functional-264648 crio[4159]: time="2025-02-14 21:28:51.283167067Z" level=info msg="Pulling image: docker.io/nginx:latest" id=763900a2-89c5-484a-92f8-e07f1ce4fc03 name=/runtime.v1.ImageService/PullImage
	Feb 14 21:28:51 functional-264648 crio[4159]: time="2025-02-14 21:28:51.285544780Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
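	The CRI-O log above shows one manifest entry rejected for "linux+amd64" before the arm64 variant of metrics-scraper resolved and was pulled, while docker.io/nginx:latest was still being retried at 21:28:51. As a hypothetical follow-up (assuming crictl is available on the node, as it normally is in the kicbase image), the resolved image can be inspected in place:
	
	  out/minikube-linux-arm64 -p functional-264648 ssh -- sudo crictl inspecti docker.io/kubernetesui/metrics-scraper:v1.0.8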
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	520cfc0080321       docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   About a minute ago   Running             dashboard-metrics-scraper   0                   641b51a0a02c9       dashboard-metrics-scraper-5d59dccf9b-q4849
	a241ed519597b       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   60695cdf17613       kubernetes-dashboard-7779f9b69b-klvrp
	ccd964688a002       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago        Exited              mount-munger                0                   d72922850b25b       busybox-mount
	02cfeed11946c       72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb                                                 2 minutes ago        Running             echoserver-arm              0                   d7f9c00630795       hello-node-64fc58db8c-j7vnx
	b7cbc0aeb0a06       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5           3 minutes ago        Running             echoserver-arm              0                   29609373bcf18       hello-node-connect-8449669db6-wwlkb
	680705ba24b30       docker.io/library/nginx@sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef                  3 minutes ago        Running             nginx                       0                   b7df05441ee0b       nginx-svc
	b32d25940e454       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                 3 minutes ago        Running             coredns                     2                   1085c9bb50f66       coredns-668d6bf9bc-6c9jg
	f5c81dd4a5c11       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 3 minutes ago        Running             storage-provisioner         2                   f516406aed503       storage-provisioner
	1150c2e914f3a       e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0                                                 3 minutes ago        Running             kube-proxy                  2                   3710c96adfdcc       kube-proxy-7rpbv
	e3c8db93046f6       e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6                                                 3 minutes ago        Running             kindnet-cni                 2                   58ca8d809fd7a       kindnet-h2dww
	1d605ae72080c       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19                                                 3 minutes ago        Running             kube-apiserver              0                   8f5c6d19b92ca       kube-apiserver-functional-264648
	cdce3b99100b5       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c                                                 3 minutes ago        Running             kube-scheduler              2                   db77c501cb450       kube-scheduler-functional-264648
	f0747de751c87       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13                                                 3 minutes ago        Running             kube-controller-manager     2                   0551b9a968a8f       kube-controller-manager-functional-264648
	fb90ee746f5b8       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                 3 minutes ago        Running             etcd                        2                   9e000cd07775b       etcd-functional-264648
	51bdf17f03a19       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                 4 minutes ago        Exited              etcd                        1                   9e000cd07775b       etcd-functional-264648
	00b3b2bada044       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 4 minutes ago        Exited              storage-provisioner         1                   f516406aed503       storage-provisioner
	5dbd5a7c3b435       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13                                                 4 minutes ago        Exited              kube-controller-manager     1                   0551b9a968a8f       kube-controller-manager-functional-264648
	01a8285141aa4       e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6                                                 4 minutes ago        Exited              kindnet-cni                 1                   58ca8d809fd7a       kindnet-h2dww
	21b7e57d53697       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                 4 minutes ago        Exited              coredns                     1                   1085c9bb50f66       coredns-668d6bf9bc-6c9jg
	2f3a6b102369d       e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0                                                 4 minutes ago        Exited              kube-proxy                  1                   3710c96adfdcc       kube-proxy-7rpbv
	d0e16c67d7471       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c                                                 4 minutes ago        Exited              kube-scheduler              1                   db77c501cb450       kube-scheduler-functional-264648
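	Most control-plane components appear twice above: an Exited attempt-1 container from the restart four minutes earlier next to its Running attempt-2 replacement in the same pod sandbox (note the matching POD IDs, e.g. 9e000cd07775b for both etcd entries). This listing comes straight from the CRI; an equivalent snapshot could be taken on the node with, for example:
	
	  out/minikube-linux-arm64 -p functional-264648 ssh -- sudo crictl ps -a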
	
	
	==> coredns [21b7e57d53697107579349c10061d9059a0a2997f2d9ec8c52dff23ce277560a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34376 - 25199 "HINFO IN 364422903801211798.829474840632264396. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.057057558s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
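	The first coredns instance above failed its list/watch calls for two different reasons, "connection refused" while the apiserver was down and an RBAC "forbidden" before its service account was authorized, then served briefly and shut down cleanly on SIGTERM. On a live cluster the same previous-instance logs could be pulled with something like (pod name taken from the container status table above):
	
	  kubectl --context functional-264648 -n kube-system logs coredns-668d6bf9bc-6c9jg --previous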
	
	
	==> coredns [b32d25940e454f13980b5d3bbec55752e2f53f9038159abd74c8308750ea92d0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47029 - 28330 "HINFO IN 7618471867355907193.8668295440997212219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031917065s
	
	
	==> describe nodes <==
	Name:               functional-264648
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-264648
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a
	                    minikube.k8s.io/name=functional-264648
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_14T21_23_43_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 14 Feb 2025 21:23:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-264648
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 14 Feb 2025 21:28:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 14 Feb 2025 21:27:31 +0000   Fri, 14 Feb 2025 21:23:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 14 Feb 2025 21:27:31 +0000   Fri, 14 Feb 2025 21:23:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 14 Feb 2025 21:27:31 +0000   Fri, 14 Feb 2025 21:23:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 14 Feb 2025 21:27:31 +0000   Fri, 14 Feb 2025 21:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-264648
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fee28773aee471d886e7a26f7a75c2d
	  System UUID:                53f6cf2f-49dc-4e2f-8aeb-30b951816c30
	  Boot ID:                    e73e80e8-f4f5-4b6f-baaf-c79d4b748ea0
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64fc58db8c-j7vnx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     hello-node-connect-8449669db6-wwlkb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-6c9jg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m5s
	  kube-system                 etcd-functional-264648                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m11s
	  kube-system                 kindnet-h2dww                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m5s
	  kube-system                 kube-apiserver-functional-264648              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-functional-264648     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-7rpbv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-functional-264648              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-q4849    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-klvrp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m3s                   kube-proxy       
	  Normal   Starting                 3m49s                  kube-proxy       
	  Normal   Starting                 4m32s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node functional-264648 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node functional-264648 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m17s (x8 over 5m17s)  kubelet          Node functional-264648 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     5m9s                   kubelet          Node functional-264648 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s                   kubelet          Node functional-264648 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s                   kubelet          Node functional-264648 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m9s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m6s                   node-controller  Node functional-264648 event: Registered Node functional-264648 in Controller
	  Normal   NodeReady                4m51s                  kubelet          Node functional-264648 status is now: NodeReady
	  Normal   RegisteredNode           4m30s                  node-controller  Node functional-264648 event: Registered Node functional-264648 in Controller
	  Normal   NodeHasSufficientMemory  3m58s (x8 over 3m58s)  kubelet          Node functional-264648 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 3m58s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 3m58s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    3m58s (x8 over 3m58s)  kubelet          Node functional-264648 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m58s (x8 over 3m58s)  kubelet          Node functional-264648 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m50s                  node-controller  Node functional-264648 event: Registered Node functional-264648 in Controller
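	In the pod table above, sp-pod (presumably the pod behind the failing TestFunctional/parallel/PersistentVolumeClaim) is listed as non-terminated at 3m2s while the node itself reports Ready, so the failure is worth tracing at the pod and claim level rather than the node level. An illustrative follow-up, assuming the kubectl context carries the profile name as minikube sets it by default:
	
	  kubectl --context functional-264648 describe pod sp-pod
	  kubectl --context functional-264648 get pvc,pv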
	
	
	==> dmesg <==
	[Feb14 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013894] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498066] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032966] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753101] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +4.898997] kauditd_printk_skb: 36 callbacks suppressed
	[Feb14 20:18] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb14 20:49] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +1.296116] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [51bdf17f03a19355dcef7bcb6a3fc26f62d4f09968aa2c5a2c5940cb85e65611] <==
	{"level":"info","ts":"2025-02-14T21:24:15.403361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-14T21:24:15.403558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-14T21:24:15.403621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-02-14T21:24:15.403657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-14T21:24:15.403703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-02-14T21:24:15.403737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-14T21:24:15.407262Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-264648 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-14T21:24:15.407503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:24:15.407795Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:24:15.411232Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-14T21:24:15.411312Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-14T21:24:15.411814Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:24:15.415314Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-14T21:24:15.419578Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:24:15.433034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-14T21:24:42.417010Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-14T21:24:42.417071Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-264648","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-02-14T21:24:42.417145Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-14T21:24:42.417231Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-14T21:24:42.502490Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-14T21:24:42.502542Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-14T21:24:42.502595Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-02-14T21:24:42.505562Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-14T21:24:42.505688Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-14T21:24:42.505718Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-264648","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [fb90ee746f5b8bdada6de8ac17c65032f463dc7884687e46bb23d3bbb1c38ce5] <==
	{"level":"info","ts":"2025-02-14T21:24:54.114122Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-14T21:24:54.114157Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-14T21:24:54.114406Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-14T21:24:54.114453Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-14T21:24:54.115215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-02-14T21:24:54.115433Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-02-14T21:24:54.115608Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:24:54.115668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:24:55.795099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-14T21:24:55.795171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-14T21:24:55.795198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-14T21:24:55.795215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-02-14T21:24:55.795223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-14T21:24:55.795235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-02-14T21:24:55.795246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-14T21:24:55.797618Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-264648 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-14T21:24:55.797768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:24:55.803687Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:24:55.804411Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-14T21:24:55.804730Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:24:55.811235Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:24:55.811988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-14T21:24:55.821213Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-14T21:24:55.821287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-14T21:25:00.276832Z","caller":"traceutil/trace.go:171","msg":"trace[442129966] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"105.878699ms","start":"2025-02-14T21:25:00.170934Z","end":"2025-02-14T21:25:00.276813Z","steps":["trace[442129966] 'process raft request'  (duration: 70.377799ms)","trace[442129966] 'compare'  (duration: 35.020797ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:28:51 up  2:11,  0 users,  load average: 0.96, 1.20, 1.78
	Linux functional-264648 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [01a8285141aa47aa3318e8d2b3d68432cf8be1153fa02117f506b8bf93061b14] <==
	I0214 21:24:13.882498       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0214 21:24:13.886493       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0214 21:24:13.886754       1 main.go:148] setting mtu 1500 for CNI 
	I0214 21:24:13.886973       1 main.go:178] kindnetd IP family: "ipv4"
	I0214 21:24:13.887028       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0214 21:24:14.172360       1 controller.go:361] Starting controller kube-network-policies
	I0214 21:24:14.181647       1 controller.go:365] Waiting for informer caches to sync
	I0214 21:24:14.181769       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	W0214 21:24:18.435925       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0214 21:24:18.436842       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0214 21:24:19.582891       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0214 21:24:19.582925       1 metrics.go:61] Registering metrics
	I0214 21:24:19.582998       1 controller.go:401] Syncing nftables rules
	I0214 21:24:24.172525       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:24:24.172659       1 main.go:301] handling current node
	I0214 21:24:34.171866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:24:34.171980       1 main.go:301] handling current node
	
	
	==> kindnet [e3c8db93046f61abfdd90a300cc2201b49ddfc08b6f213db0a5333b27ff5a1ff] <==
	I0214 21:26:51.012211       1 main.go:301] handling current node
	I0214 21:27:01.003851       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:27:01.003888       1 main.go:301] handling current node
	I0214 21:27:11.003621       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:27:11.003663       1 main.go:301] handling current node
	I0214 21:27:21.005694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:27:21.005734       1 main.go:301] handling current node
	I0214 21:27:31.005606       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:27:31.005740       1 main.go:301] handling current node
	I0214 21:27:41.005581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:27:41.005640       1 main.go:301] handling current node
	I0214 21:27:51.012471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:27:51.012508       1 main.go:301] handling current node
	I0214 21:28:01.003353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:28:01.003394       1 main.go:301] handling current node
	I0214 21:28:11.012175       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:28:11.012222       1 main.go:301] handling current node
	I0214 21:28:21.011155       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:28:21.011193       1 main.go:301] handling current node
	I0214 21:28:31.004184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:28:31.004327       1 main.go:301] handling current node
	I0214 21:28:41.004613       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:28:41.004746       1 main.go:301] handling current node
	I0214 21:28:51.011201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0214 21:28:51.011241       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d605ae72080c38a1891934ad330d7a2938d3788a925cb890ed5eea906ffb852] <==
	I0214 21:24:58.317581       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0214 21:24:58.327975       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0214 21:24:58.328062       1 policy_source.go:240] refreshing policies
	I0214 21:24:58.349902       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 21:24:58.366767       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0214 21:24:58.371149       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0214 21:24:58.371418       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0214 21:24:58.387083       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0214 21:24:58.404786       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0214 21:24:58.966650       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 21:24:59.804753       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0214 21:24:59.925526       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0214 21:24:59.998140       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 21:25:00.009600       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0214 21:25:01.466083       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0214 21:25:01.579814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0214 21:25:01.715980       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 21:25:16.040970       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.51.53"}
	I0214 21:25:28.425763       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.19.185"}
	I0214 21:25:38.036919       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.162.217"}
	E0214 21:25:49.042755       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39782: use of closed network connection
	I0214 21:25:51.653372       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.205.81"}
	I0214 21:26:33.994472       1 controller.go:615] quota admission added evaluator for: namespaces
	I0214 21:26:34.360232       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.246.113"}
	I0214 21:26:34.385148       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.188.97"}
	
	
	==> kube-controller-manager [5dbd5a7c3b43530d7db99eb8ac14f29f9992bbb51e1e8ad9fcd01a1b16a2b77e] <==
	I0214 21:24:21.575521       1 shared_informer.go:320] Caches are synced for PVC protection
	I0214 21:24:21.575531       1 shared_informer.go:320] Caches are synced for cronjob
	I0214 21:24:21.575545       1 shared_informer.go:320] Caches are synced for attach detach
	I0214 21:24:21.575554       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0214 21:24:21.575564       1 shared_informer.go:320] Caches are synced for expand
	I0214 21:24:21.575821       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0214 21:24:21.579564       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 21:24:21.580732       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 21:24:21.583881       1 shared_informer.go:320] Caches are synced for namespace
	I0214 21:24:21.589111       1 shared_informer.go:320] Caches are synced for disruption
	I0214 21:24:21.591370       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:24:21.593466       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0214 21:24:21.596132       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0214 21:24:21.606416       1 shared_informer.go:320] Caches are synced for HPA
	I0214 21:24:21.609692       1 shared_informer.go:320] Caches are synced for stateful set
	I0214 21:24:21.615887       1 shared_informer.go:320] Caches are synced for job
	I0214 21:24:21.618426       1 shared_informer.go:320] Caches are synced for daemon sets
	I0214 21:24:21.623883       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:24:21.623913       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0214 21:24:21.623924       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0214 21:24:21.832946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="256.639666ms"
	I0214 21:24:21.833030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.489µs"
	I0214 21:24:23.043384       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-264648"
	I0214 21:24:25.396116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.674138ms"
	I0214 21:24:25.397326       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.090541ms"
	
	
	==> kube-controller-manager [f0747de751c87d5b4ce5bd39477e071917b16e00b6a1debe2a8f9bb35ff6b3c2] <==
	E0214 21:26:34.139550       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0214 21:26:34.152976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="16.905534ms"
	E0214 21:26:34.153033       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0214 21:26:34.162759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="21.444596ms"
	E0214 21:26:34.162794       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0214 21:26:34.165026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="10.849573ms"
	E0214 21:26:34.165068       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0214 21:26:34.171291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="6.968391ms"
	E0214 21:26:34.171332       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0214 21:26:34.172105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="5.896171ms"
	E0214 21:26:34.172136       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0214 21:26:34.219311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="31.216328ms"
	I0214 21:26:34.241543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="21.427586ms"
	I0214 21:26:34.242600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="140.657µs"
	I0214 21:26:34.244862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="39.47043ms"
	I0214 21:26:34.267377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="42.238µs"
	I0214 21:26:34.314019       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="68.78773ms"
	I0214 21:26:34.343954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="29.81359ms"
	I0214 21:26:34.344177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="55.622µs"
	I0214 21:27:01.126478       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-264648"
	I0214 21:27:07.773979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="26.743565ms"
	I0214 21:27:07.774200       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="69.668µs"
	I0214 21:27:10.768457       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="15.282865ms"
	I0214 21:27:10.768568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="41.664µs"
	I0214 21:27:31.460213       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-264648"
	
	
	==> kube-proxy [1150c2e914f3a93a64df0a8c70ff0d7a75a7a76232d78fb51d0c8acb6c9774ea] <==
	I0214 21:25:01.432441       1 server_linux.go:66] "Using iptables proxy"
	I0214 21:25:01.592748       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0214 21:25:01.592831       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0214 21:25:01.634279       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 21:25:01.634352       1 server_linux.go:170] "Using iptables Proxier"
	I0214 21:25:01.636850       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0214 21:25:01.637249       1 server.go:497] "Version info" version="v1.32.1"
	I0214 21:25:01.637304       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:25:01.639659       1 config.go:199] "Starting service config controller"
	I0214 21:25:01.639715       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0214 21:25:01.639748       1 config.go:105] "Starting endpoint slice config controller"
	I0214 21:25:01.639752       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0214 21:25:01.641426       1 config.go:329] "Starting node config controller"
	I0214 21:25:01.641441       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0214 21:25:01.740196       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0214 21:25:01.740245       1 shared_informer.go:320] Caches are synced for service config
	I0214 21:25:01.741654       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [2f3a6b102369d49f02e7501aef363f13f1e996239a95f07737d7b6b4133c4b24] <==
	I0214 21:24:17.315870       1 server_linux.go:66] "Using iptables proxy"
	I0214 21:24:18.459988       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0214 21:24:18.512472       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0214 21:24:18.763811       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 21:24:18.767844       1 server_linux.go:170] "Using iptables Proxier"
	I0214 21:24:18.775918       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0214 21:24:18.776362       1 server.go:497] "Version info" version="v1.32.1"
	I0214 21:24:18.776477       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:24:18.788659       1 config.go:199] "Starting service config controller"
	I0214 21:24:18.791921       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0214 21:24:18.792110       1 config.go:105] "Starting endpoint slice config controller"
	I0214 21:24:18.792147       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0214 21:24:18.800865       1 config.go:329] "Starting node config controller"
	I0214 21:24:18.803566       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0214 21:24:18.892737       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0214 21:24:18.897038       1 shared_informer.go:320] Caches are synced for service config
	I0214 21:24:18.904033       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cdce3b99100b5198d8af1fd9f88bfb2c255a64ac06b509870d54686d1ab4a893] <==
	I0214 21:24:57.530538       1 serving.go:386] Generated self-signed cert in-memory
	I0214 21:24:58.399555       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0214 21:24:58.399654       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:24:58.422696       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 21:24:58.424858       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0214 21:24:58.424914       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0214 21:24:58.424969       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0214 21:24:58.427007       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 21:24:58.430890       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 21:24:58.430498       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0214 21:24:58.431284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0214 21:24:58.525136       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0214 21:24:58.531662       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0214 21:24:58.531826       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d0e16c67d74713efc215b3362d5d97d1adfc343e031a4697e1ce477317553bc2] <==
	I0214 21:24:16.164447       1 serving.go:386] Generated self-signed cert in-memory
	W0214 21:24:18.393927       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 21:24:18.394032       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 21:24:18.394066       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 21:24:18.394108       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 21:24:18.433399       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0214 21:24:18.433504       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:24:18.442376       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 21:24:18.442654       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 21:24:18.442713       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 21:24:18.442756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0214 21:24:18.543263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 21:24:42.416722       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0214 21:24:42.416768       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0214 21:24:42.416862       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0214 21:24:42.417259       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 14 21:27:53 functional-264648 kubelet[4446]: E0214 21:27:53.397291    4446 manager.go:1116] Failed to create existing container: /crio-3710c96adfdcc18f1bb96619d1454edcdfa56cb6d99263277e16eb764208c058: Error finding container 3710c96adfdcc18f1bb96619d1454edcdfa56cb6d99263277e16eb764208c058: Status 404 returned error can't find the container with id 3710c96adfdcc18f1bb96619d1454edcdfa56cb6d99263277e16eb764208c058
	Feb 14 21:27:53 functional-264648 kubelet[4446]: E0214 21:27:53.397507    4446 manager.go:1116] Failed to create existing container: /crio-1085c9bb50f6634180d42a788ffb932e3b640c5dd2257a658db43b03c2736d25: Error finding container 1085c9bb50f6634180d42a788ffb932e3b640c5dd2257a658db43b03c2736d25: Status 404 returned error can't find the container with id 1085c9bb50f6634180d42a788ffb932e3b640c5dd2257a658db43b03c2736d25
	Feb 14 21:27:53 functional-264648 kubelet[4446]: E0214 21:27:53.397705    4446 manager.go:1116] Failed to create existing container: /docker/25670fe1b4bc753554415b10d89c74ded3d489e1afed3cecb33e875fc010af84/crio-0551b9a968a8fd2fe7685d5346fcf06ecc3b1798dd6aa701b71ea3c927540517: Error finding container 0551b9a968a8fd2fe7685d5346fcf06ecc3b1798dd6aa701b71ea3c927540517: Status 404 returned error can't find the container with id 0551b9a968a8fd2fe7685d5346fcf06ecc3b1798dd6aa701b71ea3c927540517
	Feb 14 21:27:53 functional-264648 kubelet[4446]: E0214 21:27:53.398810    4446 manager.go:1116] Failed to create existing container: /crio-58ca8d809fd7a9ef954a58069ef43ba95f7b7669728890bf49c49f2e6e148907: Error finding container 58ca8d809fd7a9ef954a58069ef43ba95f7b7669728890bf49c49f2e6e148907: Status 404 returned error can't find the container with id 58ca8d809fd7a9ef954a58069ef43ba95f7b7669728890bf49c49f2e6e148907
	Feb 14 21:27:53 functional-264648 kubelet[4446]: E0214 21:27:53.400788    4446 manager.go:1116] Failed to create existing container: /docker/25670fe1b4bc753554415b10d89c74ded3d489e1afed3cecb33e875fc010af84/crio-db77c501cb450e22d5aa876f86005893dc3739695a98e32481719a5a86159fe7: Error finding container db77c501cb450e22d5aa876f86005893dc3739695a98e32481719a5a86159fe7: Status 404 returned error can't find the container with id db77c501cb450e22d5aa876f86005893dc3739695a98e32481719a5a86159fe7
	Feb 14 21:27:53 functional-264648 kubelet[4446]: E0214 21:27:53.401817    4446 manager.go:1116] Failed to create existing container: /crio-f516406aed503efe4fe8d547ff5f301b80084cf114b46ad1be862bc4b2df4329: Error finding container f516406aed503efe4fe8d547ff5f301b80084cf114b46ad1be862bc4b2df4329: Status 404 returned error can't find the container with id f516406aed503efe4fe8d547ff5f301b80084cf114b46ad1be862bc4b2df4329
	Feb 14 21:27:53 functional-264648 kubelet[4446]: E0214 21:27:53.428161    4446 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568473427979842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:27:53 functional-264648 kubelet[4446]: E0214 21:27:53.428394    4446 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568473427979842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:03 functional-264648 kubelet[4446]: E0214 21:28:03.430113    4446 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568483429879760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:03 functional-264648 kubelet[4446]: E0214 21:28:03.430155    4446 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568483429879760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:10 functional-264648 kubelet[4446]: E0214 21:28:10.770162    4446 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Feb 14 21:28:10 functional-264648 kubelet[4446]: E0214 21:28:10.770227    4446 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Feb 14 21:28:10 functional-264648 kubelet[4446]: E0214 21:28:10.770333    4446 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mztr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(32713a12-848d-43f5-b2ad-870e19e9bc10): ErrImagePull: loading manifest for target platform: reading manifest sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 14 21:28:10 functional-264648 kubelet[4446]: E0214 21:28:10.771681    4446 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="32713a12-848d-43f5-b2ad-870e19e9bc10"
	Feb 14 21:28:13 functional-264648 kubelet[4446]: E0214 21:28:13.431639    4446 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568493431453457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:13 functional-264648 kubelet[4446]: E0214 21:28:13.431675    4446 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568493431453457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:16 functional-264648 kubelet[4446]: E0214 21:28:16.352348    4446 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f42cbfcd717e15e9d325f7efe725e64daec967b0fa61d33f73f88d22f134d950/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f42cbfcd717e15e9d325f7efe725e64daec967b0fa61d33f73f88d22f134d950/diff: no such file or directory, extraDiskErr: <nil>
	Feb 14 21:28:22 functional-264648 kubelet[4446]: E0214 21:28:22.281457    4446 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="32713a12-848d-43f5-b2ad-870e19e9bc10"
	Feb 14 21:28:23 functional-264648 kubelet[4446]: E0214 21:28:23.433118    4446 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568503432933467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:23 functional-264648 kubelet[4446]: E0214 21:28:23.433155    4446 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568503432933467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:33 functional-264648 kubelet[4446]: E0214 21:28:33.434578    4446 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568513434399711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:33 functional-264648 kubelet[4446]: E0214 21:28:33.434622    4446 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568513434399711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:37 functional-264648 kubelet[4446]: E0214 21:28:37.282047    4446 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="32713a12-848d-43f5-b2ad-870e19e9bc10"
	Feb 14 21:28:43 functional-264648 kubelet[4446]: E0214 21:28:43.435971    4446 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568523435789682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:28:43 functional-264648 kubelet[4446]: E0214 21:28:43.436005    4446 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739568523435789682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:304870,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [a241ed519597bc81b35e97ff6fcbc221acb913a4814eceded4fd3771e0241d12] <==
	2025/02/14 21:27:07 Using namespace: kubernetes-dashboard
	2025/02/14 21:27:07 Using in-cluster config to connect to apiserver
	2025/02/14 21:27:07 Using secret token for csrf signing
	2025/02/14 21:27:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/14 21:27:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/14 21:27:07 Successful initial request to the apiserver, version: v1.32.1
	2025/02/14 21:27:07 Generating JWE encryption key
	2025/02/14 21:27:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/14 21:27:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/14 21:27:07 Initializing JWE encryption key from synchronized object
	2025/02/14 21:27:07 Creating in-cluster Sidecar client
	2025/02/14 21:27:07 Serving insecurely on HTTP port: 9090
	2025/02/14 21:27:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/14 21:27:37 Successful request to sidecar
	2025/02/14 21:27:07 Starting overwatch
	
	
	==> storage-provisioner [00b3b2bada0441f467415734f2d6f8fc91f2783661b0af802fbdc6258f31c41a] <==
	I0214 21:24:14.976999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 21:24:18.534225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 21:24:18.534287       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 21:24:35.937391       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 21:24:35.937579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-264648_2a85afc6-9d70-47f3-9a2f-54cd2d4d2ad5!
	I0214 21:24:35.939042       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cea97ec5-8693-4b89-91a7-d4122c205248", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-264648_2a85afc6-9d70-47f3-9a2f-54cd2d4d2ad5 became leader
	I0214 21:24:36.038004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-264648_2a85afc6-9d70-47f3-9a2f-54cd2d4d2ad5!
	
	
	==> storage-provisioner [f5c81dd4a5c1109c7fe84141ab6af2e21420462d47758048068959f3689b5832] <==
	I0214 21:25:01.202870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 21:25:01.407419       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 21:25:01.407538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 21:25:18.871087       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 21:25:18.871276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-264648_ea32b71f-a2d6-4ce6-915e-18ba6d1ceabd!
	I0214 21:25:18.871960       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cea97ec5-8693-4b89-91a7-d4122c205248", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-264648_ea32b71f-a2d6-4ce6-915e-18ba6d1ceabd became leader
	I0214 21:25:18.972215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-264648_ea32b71f-a2d6-4ce6-915e-18ba6d1ceabd!
	I0214 21:25:36.645728       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0214 21:25:36.645859       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    f6bf2a6e-b100-4892-82e3-0d1c2394514a 338 0 2025-02-14 21:23:46 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-14 21:23:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9dcc85f9-672e-451f-9dde-c6d101fa048c &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9dcc85f9-672e-451f-9dde-c6d101fa048c 708 0 2025-02-14 21:25:36 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-14 21:25:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-14 21:25:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0214 21:25:36.647450       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9dcc85f9-672e-451f-9dde-c6d101fa048c", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0214 21:25:36.651869       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9dcc85f9-672e-451f-9dde-c6d101fa048c" provisioned
	I0214 21:25:36.651958       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0214 21:25:36.651989       1 volume_store.go:212] Trying to save persistentvolume "pvc-9dcc85f9-672e-451f-9dde-c6d101fa048c"
	I0214 21:25:36.691204       1 volume_store.go:219] persistentvolume "pvc-9dcc85f9-672e-451f-9dde-c6d101fa048c" saved
	I0214 21:25:36.693196       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9dcc85f9-672e-451f-9dde-c6d101fa048c", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9dcc85f9-672e-451f-9dde-c6d101fa048c
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-264648 -n functional-264648
helpers_test.go:261: (dbg) Run:  kubectl --context functional-264648 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-264648 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-264648 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-264648/192.168.49.2
	Start Time:       Fri, 14 Feb 2025 21:26:03 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ccd964688a00205151b81e80701b0d4f2e12bd6c2aedc480b2f3ba6973bece67
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 14 Feb 2025 21:26:23 +0000
	      Finished:     Fri, 14 Feb 2025 21:26:23 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kdzrm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kdzrm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m49s  default-scheduler  Successfully assigned default/busybox-mount to functional-264648
	  Normal  Pulling    2m50s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m30s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.156s (19.905s including waiting). Image size: 3774172 bytes.
	  Normal  Created    2m30s  kubelet            Created container: mount-munger
	  Normal  Started    2m30s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-264648/192.168.49.2
	Start Time:       Fri, 14 Feb 2025 21:25:49 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mztr8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-mztr8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m3s                  default-scheduler  Successfully assigned default/sp-pod to functional-264648
	  Warning  Failed     111s (x2 over 2m33s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     43s (x3 over 2m33s)   kubelet            Error: ErrImagePull
	  Warning  Failed     43s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    16s (x4 over 2m33s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     16s (x4 over 2m33s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2s (x4 over 3m3s)     kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (202.62s)
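
Root cause for this failure is the registry rate limit, not the claim: the storage-provisioner log above shows pvc-9dcc85f9-672e-451f-9dde-c6d101fa048c provisioned and saved, while the kubelet log and the sp-pod events show every pull of docker.io/nginx rejected with toomanyrequests. A minimal workaround sketch, assuming the functional-264648 profile is still running and the host itself can fetch the image; the kubectl checks are illustrative additions, not commands the test runs:

	# confirm the claim bound and the pod is only waiting on the image
	kubectl --context functional-264648 get pvc myclaim
	kubectl --context functional-264648 get pod sp-pod

	# side-load the image from the host so the node's CRI-O never pulls from docker.io
	docker pull docker.io/nginx
	minikube -p functional-264648 image load docker.io/nginx

	# kubelet should then start myfrontend on the next ImagePullBackOff retry
	kubectl --context functional-264648 get pod sp-pod -w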

                                                
                                    
TestScheduledStopUnix (36.84s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-183854 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-183854 --memory=2048 --driver=docker  --container-runtime=crio: (31.759158283s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-183854 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-183854 -n scheduled-stop-183854
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-183854 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 415026 running but should have been killed on reschedule of stop
panic.go:629: *** TestScheduledStopUnix FAILED at 2025-02-14 21:53:59.451954358 +0000 UTC m=+2377.531079642
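
The assertion at scheduled_stop_test.go:98 is that rescheduling a stop replaces the earlier scheduled-stop daemon; here the process spawned by the first --schedule 5m run (pid 415026 in this run) was still alive after the --schedule 15s reschedule. A rough manual repro sketch using the same commands the test drives, assuming the same minikube build on a Linux host; the pgrep pattern is an illustrative assumption, not something the test uses:

	out/minikube-linux-arm64 stop -p scheduled-stop-183854 --schedule 5m
	# rescheduling is expected to kill the earlier background stop process
	out/minikube-linux-arm64 stop -p scheduled-stop-183854 --schedule 15s
	# any surviving first-schedule process here reproduces the failure
	pgrep -af "minikube-linux-arm64 stop -p scheduled-stop-183854"
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-183854
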
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-183854
helpers_test.go:235: (dbg) docker inspect scheduled-stop-183854:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "96bcabb2f668aaacf3e4901ec5c8901cbc0123ed265214f34be1bbac3e48e916",
	        "Created": "2025-02-14T21:53:32.789564076Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 413081,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-14T21:53:32.934960823Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:86f383d95829214691bb905fe90945d8bf2efbbe5a717e0830a616744d143ec9",
	        "ResolvConfPath": "/var/lib/docker/containers/96bcabb2f668aaacf3e4901ec5c8901cbc0123ed265214f34be1bbac3e48e916/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/96bcabb2f668aaacf3e4901ec5c8901cbc0123ed265214f34be1bbac3e48e916/hostname",
	        "HostsPath": "/var/lib/docker/containers/96bcabb2f668aaacf3e4901ec5c8901cbc0123ed265214f34be1bbac3e48e916/hosts",
	        "LogPath": "/var/lib/docker/containers/96bcabb2f668aaacf3e4901ec5c8901cbc0123ed265214f34be1bbac3e48e916/96bcabb2f668aaacf3e4901ec5c8901cbc0123ed265214f34be1bbac3e48e916-json.log",
	        "Name": "/scheduled-stop-183854",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-183854:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-183854",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/81ed36323cae7254d883943bd2440df6511d6584d98a9634cfb357af1fd21b78-init/diff:/var/lib/docker/overlay2/98047733aa5d86fafdd36d9f264e1aa5c3c6b5243d320c9d2e042ec72038fd21/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81ed36323cae7254d883943bd2440df6511d6584d98a9634cfb357af1fd21b78/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81ed36323cae7254d883943bd2440df6511d6584d98a9634cfb357af1fd21b78/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81ed36323cae7254d883943bd2440df6511d6584d98a9634cfb357af1fd21b78/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-183854",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-183854/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-183854",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-183854",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-183854",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fbe84abb24f8e3c77ca96564bcf4cc6f1a912f51fec918eedf38f09730305c28",
	            "SandboxKey": "/var/run/docker/netns/fbe84abb24f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33332"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33333"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33334"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33335"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-183854": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b490763631c1be995b08365fd95155c93e93f7debf60ac26839ffb2ba28cbf54",
	                    "EndpointID": "428de9d66426b11adb144989e79d90e8173f161fce4e4a40f7accbb8a03e71eb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-183854",
	                        "96bcabb2f668"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
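Note: the Ports map in the inspect output above records the host-published control ports for the node container (22/tcp on 127.0.0.1:33332, 2376 on 33333, 8443 on 33335, and so on); the SSH provisioning steps later in this log connect to 127.0.0.1:33332 accordingly. The same binding can be re-derived directly:

	docker port scheduled-stop-183854 22/tcp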
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-183854 -n scheduled-stop-183854
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-183854 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-183854 logs -n 25: (1.364551627s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-076743            | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:47 UTC | 14 Feb 25 21:48 UTC |
	| start   | -p multinode-076743            | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:49 UTC |
	|         | --wait=true -v=5               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-076743       | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC |                     |
	| node    | multinode-076743 node delete   | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:49 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-076743 stop          | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:49 UTC |
	| start   | -p multinode-076743            | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:50 UTC |
	|         | --wait=true -v=5               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | list -p multinode-076743       | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:50 UTC |                     |
	| start   | -p multinode-076743-m02        | multinode-076743-m02  | jenkins | v1.35.0 | 14 Feb 25 21:50 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| start   | -p multinode-076743-m03        | multinode-076743-m03  | jenkins | v1.35.0 | 14 Feb 25 21:50 UTC | 14 Feb 25 21:51 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | add -p multinode-076743        | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:51 UTC |                     |
	| delete  | -p multinode-076743-m03        | multinode-076743-m03  | jenkins | v1.35.0 | 14 Feb 25 21:51 UTC | 14 Feb 25 21:51 UTC |
	| delete  | -p multinode-076743            | multinode-076743      | jenkins | v1.35.0 | 14 Feb 25 21:51 UTC | 14 Feb 25 21:51 UTC |
	| start   | -p test-preload-940515         | test-preload-940515   | jenkins | v1.35.0 | 14 Feb 25 21:51 UTC | 14 Feb 25 21:52 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-940515 image pull | test-preload-940515   | jenkins | v1.35.0 | 14 Feb 25 21:52 UTC | 14 Feb 25 21:52 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-940515         | test-preload-940515   | jenkins | v1.35.0 | 14 Feb 25 21:52 UTC | 14 Feb 25 21:52 UTC |
	| start   | -p test-preload-940515         | test-preload-940515   | jenkins | v1.35.0 | 14 Feb 25 21:52 UTC | 14 Feb 25 21:53 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| image   | test-preload-940515 image list | test-preload-940515   | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC | 14 Feb 25 21:53 UTC |
	| delete  | -p test-preload-940515         | test-preload-940515   | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC | 14 Feb 25 21:53 UTC |
	| start   | -p scheduled-stop-183854       | scheduled-stop-183854 | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC | 14 Feb 25 21:53 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-183854       | scheduled-stop-183854 | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-183854       | scheduled-stop-183854 | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-183854       | scheduled-stop-183854 | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-183854       | scheduled-stop-183854 | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-183854       | scheduled-stop-183854 | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-183854       | scheduled-stop-183854 | jenkins | v1.35.0 | 14 Feb 25 21:53 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:53:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:53:27.232986  412584 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:53:27.233102  412584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:53:27.233107  412584 out.go:358] Setting ErrFile to fd 2...
	I0214 21:53:27.233110  412584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:53:27.233397  412584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:53:27.233870  412584 out.go:352] Setting JSON to false
	I0214 21:53:27.234728  412584 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9354,"bootTime":1739560653,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 21:53:27.234801  412584 start.go:140] virtualization:  
	I0214 21:53:27.241157  412584 out.go:177] * [scheduled-stop-183854] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0214 21:53:27.244690  412584 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:53:27.244792  412584 notify.go:220] Checking for updates...
	I0214 21:53:27.251400  412584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:53:27.254623  412584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:53:27.257804  412584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	I0214 21:53:27.260948  412584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 21:53:27.263908  412584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:53:27.267262  412584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:53:27.302671  412584 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0214 21:53:27.302789  412584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:53:27.360841  412584 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-02-14 21:53:27.351508227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:53:27.360939  412584 docker.go:318] overlay module found
	I0214 21:53:27.364192  412584 out.go:177] * Using the docker driver based on user configuration
	I0214 21:53:27.367193  412584 start.go:304] selected driver: docker
	I0214 21:53:27.367202  412584 start.go:908] validating driver "docker" against <nil>
	I0214 21:53:27.367215  412584 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:53:27.367979  412584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:53:27.431469  412584 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-02-14 21:53:27.420974364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:53:27.431704  412584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:53:27.431975  412584 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 21:53:27.434973  412584 out.go:177] * Using Docker driver with root privileges
	I0214 21:53:27.438116  412584 cni.go:84] Creating CNI manager for ""
	I0214 21:53:27.438181  412584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 21:53:27.438191  412584 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 21:53:27.438270  412584 start.go:347] cluster config:
	{Name:scheduled-stop-183854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-183854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:53:27.441770  412584 out.go:177] * Starting "scheduled-stop-183854" primary control-plane node in "scheduled-stop-183854" cluster
	I0214 21:53:27.444685  412584 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 21:53:27.447656  412584 out.go:177] * Pulling base image v0.0.46-1739182054-20387 ...
	I0214 21:53:27.450586  412584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0214 21:53:27.450815  412584 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:53:27.450849  412584 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0214 21:53:27.450856  412584 cache.go:56] Caching tarball of preloaded images
	I0214 21:53:27.450928  412584 preload.go:172] Found /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0214 21:53:27.450937  412584 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0214 21:53:27.451327  412584 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/config.json ...
	I0214 21:53:27.451349  412584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/config.json: {Name:mke8f4ba957c343347ee3747143ba01c72aef100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:27.473575  412584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon, skipping pull
	I0214 21:53:27.473588  412584 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad exists in daemon, skipping load
	I0214 21:53:27.473611  412584 cache.go:230] Successfully downloaded all kic artifacts
	I0214 21:53:27.473654  412584 start.go:360] acquireMachinesLock for scheduled-stop-183854: {Name:mk545d296fc9ee1ead3b8ffe1cba95bf86a5d26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:53:27.473794  412584 start.go:364] duration metric: took 119.726µs to acquireMachinesLock for "scheduled-stop-183854"
	I0214 21:53:27.473821  412584 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-183854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-183854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:53:27.473932  412584 start.go:125] createHost starting for "" (driver="docker")
	I0214 21:53:27.477318  412584 out.go:235] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0214 21:53:27.477570  412584 start.go:159] libmachine.API.Create for "scheduled-stop-183854" (driver="docker")
	I0214 21:53:27.477600  412584 client.go:168] LocalClient.Create starting
	I0214 21:53:27.477686  412584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem
	I0214 21:53:27.477720  412584 main.go:141] libmachine: Decoding PEM data...
	I0214 21:53:27.477732  412584 main.go:141] libmachine: Parsing certificate...
	I0214 21:53:27.477784  412584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-272800/.minikube/certs/cert.pem
	I0214 21:53:27.477804  412584 main.go:141] libmachine: Decoding PEM data...
	I0214 21:53:27.477815  412584 main.go:141] libmachine: Parsing certificate...
	I0214 21:53:27.478191  412584 cli_runner.go:164] Run: docker network inspect scheduled-stop-183854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 21:53:27.494634  412584 cli_runner.go:211] docker network inspect scheduled-stop-183854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 21:53:27.494728  412584 network_create.go:284] running [docker network inspect scheduled-stop-183854] to gather additional debugging logs...
	I0214 21:53:27.494745  412584 cli_runner.go:164] Run: docker network inspect scheduled-stop-183854
	W0214 21:53:27.511374  412584 cli_runner.go:211] docker network inspect scheduled-stop-183854 returned with exit code 1
	I0214 21:53:27.511398  412584 network_create.go:287] error running [docker network inspect scheduled-stop-183854]: docker network inspect scheduled-stop-183854: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-183854 not found
	I0214 21:53:27.511411  412584 network_create.go:289] output of [docker network inspect scheduled-stop-183854]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-183854 not found
	
	** /stderr **
	I0214 21:53:27.511512  412584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 21:53:27.529149  412584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0519224eb73 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ed:9e:3b:21} reservation:<nil>}
	I0214 21:53:27.529506  412584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-979f597b6546 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:20:22:ea:d0} reservation:<nil>}
	I0214 21:53:27.529759  412584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b3c82e646a5f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:c3:9a:31:5f} reservation:<nil>}
	I0214 21:53:27.530176  412584 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001985b00}
	I0214 21:53:27.530194  412584 network_create.go:124] attempt to create docker network scheduled-stop-183854 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0214 21:53:27.530248  412584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-183854 scheduled-stop-183854
	I0214 21:53:27.602722  412584 network_create.go:108] docker network scheduled-stop-183854 192.168.76.0/24 created
	I0214 21:53:27.602743  412584 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-183854" container
	I0214 21:53:27.602812  412584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 21:53:27.619602  412584 cli_runner.go:164] Run: docker volume create scheduled-stop-183854 --label name.minikube.sigs.k8s.io=scheduled-stop-183854 --label created_by.minikube.sigs.k8s.io=true
	I0214 21:53:27.636998  412584 oci.go:103] Successfully created a docker volume scheduled-stop-183854
	I0214 21:53:27.637088  412584 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-183854-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-183854 --entrypoint /usr/bin/test -v scheduled-stop-183854:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -d /var/lib
	I0214 21:53:28.216213  412584 oci.go:107] Successfully prepared a docker volume scheduled-stop-183854
	I0214 21:53:28.216257  412584 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:53:28.216275  412584 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 21:53:28.216339  412584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-183854:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 21:53:32.726490  412584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-183854:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad -I lz4 -xf /preloaded.tar -C /extractDir: (4.510103604s)
	I0214 21:53:32.726511  412584 kic.go:203] duration metric: took 4.510233069s to extract preloaded images to volume ...
	W0214 21:53:32.726654  412584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 21:53:32.726757  412584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 21:53:32.774712  412584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-183854 --name scheduled-stop-183854 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-183854 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-183854 --network scheduled-stop-183854 --ip 192.168.76.2 --volume scheduled-stop-183854:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad
	I0214 21:53:33.132158  412584 cli_runner.go:164] Run: docker container inspect scheduled-stop-183854 --format={{.State.Running}}
	I0214 21:53:33.157651  412584 cli_runner.go:164] Run: docker container inspect scheduled-stop-183854 --format={{.State.Status}}
	I0214 21:53:33.181697  412584 cli_runner.go:164] Run: docker exec scheduled-stop-183854 stat /var/lib/dpkg/alternatives/iptables
	I0214 21:53:33.231877  412584 oci.go:144] the created container "scheduled-stop-183854" has a running status.
	I0214 21:53:33.231897  412584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa...
	I0214 21:53:34.319786  412584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 21:53:34.339671  412584 cli_runner.go:164] Run: docker container inspect scheduled-stop-183854 --format={{.State.Status}}
	I0214 21:53:34.356348  412584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 21:53:34.356361  412584 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-183854 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 21:53:34.409537  412584 cli_runner.go:164] Run: docker container inspect scheduled-stop-183854 --format={{.State.Status}}
	I0214 21:53:34.426422  412584 machine.go:93] provisionDockerMachine start ...
	I0214 21:53:34.426518  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:34.444051  412584 main.go:141] libmachine: Using SSH client type: native
	I0214 21:53:34.444324  412584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33332 <nil> <nil>}
	I0214 21:53:34.444332  412584 main.go:141] libmachine: About to run SSH command:
	hostname
	I0214 21:53:34.570384  412584 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-183854
	
	I0214 21:53:34.570397  412584 ubuntu.go:169] provisioning hostname "scheduled-stop-183854"
	I0214 21:53:34.570458  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:34.586924  412584 main.go:141] libmachine: Using SSH client type: native
	I0214 21:53:34.587278  412584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33332 <nil> <nil>}
	I0214 21:53:34.587289  412584 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-183854 && echo "scheduled-stop-183854" | sudo tee /etc/hostname
	I0214 21:53:34.726485  412584 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-183854
	
	I0214 21:53:34.726566  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:34.743489  412584 main.go:141] libmachine: Using SSH client type: native
	I0214 21:53:34.743722  412584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33332 <nil> <nil>}
	I0214 21:53:34.743739  412584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-183854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-183854/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-183854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:53:34.871141  412584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:53:34.871159  412584 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20315-272800/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-272800/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-272800/.minikube}
	I0214 21:53:34.871181  412584 ubuntu.go:177] setting up certificates
	I0214 21:53:34.871196  412584 provision.go:84] configureAuth start
	I0214 21:53:34.871255  412584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-183854
	I0214 21:53:34.888397  412584 provision.go:143] copyHostCerts
	I0214 21:53:34.888454  412584 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-272800/.minikube/ca.pem, removing ...
	I0214 21:53:34.888461  412584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-272800/.minikube/ca.pem
	I0214 21:53:34.888535  412584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-272800/.minikube/ca.pem (1082 bytes)
	I0214 21:53:34.888625  412584 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-272800/.minikube/cert.pem, removing ...
	I0214 21:53:34.888629  412584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-272800/.minikube/cert.pem
	I0214 21:53:34.888652  412584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-272800/.minikube/cert.pem (1123 bytes)
	I0214 21:53:34.888700  412584 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-272800/.minikube/key.pem, removing ...
	I0214 21:53:34.888703  412584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-272800/.minikube/key.pem
	I0214 21:53:34.888724  412584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-272800/.minikube/key.pem (1675 bytes)
	I0214 21:53:34.888768  412584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-272800/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-183854 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-183854]
	I0214 21:53:35.591801  412584 provision.go:177] copyRemoteCerts
	I0214 21:53:35.591859  412584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:53:35.591902  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:35.608863  412584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33332 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa Username:docker}
	I0214 21:53:35.704554  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:53:35.729043  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0214 21:53:35.752242  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 21:53:35.775458  412584 provision.go:87] duration metric: took 904.240374ms to configureAuth
	I0214 21:53:35.775476  412584 ubuntu.go:193] setting minikube options for container-runtime
	I0214 21:53:35.775662  412584 config.go:182] Loaded profile config "scheduled-stop-183854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:53:35.775769  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:35.792766  412584 main.go:141] libmachine: Using SSH client type: native
	I0214 21:53:35.793001  412584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 33332 <nil> <nil>}
	I0214 21:53:35.793014  412584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:53:36.030509  412584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:53:36.030523  412584 machine.go:96] duration metric: took 1.604089745s to provisionDockerMachine
	I0214 21:53:36.030532  412584 client.go:171] duration metric: took 8.552927062s to LocalClient.Create
	I0214 21:53:36.030544  412584 start.go:167] duration metric: took 8.552976473s to libmachine.API.Create "scheduled-stop-183854"
	I0214 21:53:36.030550  412584 start.go:293] postStartSetup for "scheduled-stop-183854" (driver="docker")
	I0214 21:53:36.030560  412584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:53:36.030626  412584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:53:36.030680  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:36.050111  412584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33332 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa Username:docker}
	I0214 21:53:36.144228  412584 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:53:36.147493  412584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 21:53:36.147526  412584 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 21:53:36.147535  412584 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 21:53:36.147541  412584 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0214 21:53:36.147550  412584 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-272800/.minikube/addons for local assets ...
	I0214 21:53:36.147608  412584 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-272800/.minikube/files for local assets ...
	I0214 21:53:36.147686  412584 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-272800/.minikube/files/etc/ssl/certs/2781862.pem -> 2781862.pem in /etc/ssl/certs
	I0214 21:53:36.147785  412584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 21:53:36.156170  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/files/etc/ssl/certs/2781862.pem --> /etc/ssl/certs/2781862.pem (1708 bytes)
	I0214 21:53:36.180065  412584 start.go:296] duration metric: took 149.49653ms for postStartSetup
	I0214 21:53:36.180421  412584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-183854
	I0214 21:53:36.197079  412584 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/config.json ...
	I0214 21:53:36.197362  412584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:53:36.197403  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:36.215081  412584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33332 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa Username:docker}
	I0214 21:53:36.307778  412584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 21:53:36.312257  412584 start.go:128] duration metric: took 8.838309762s to createHost
	I0214 21:53:36.312272  412584 start.go:83] releasing machines lock for "scheduled-stop-183854", held for 8.838469167s
	I0214 21:53:36.312342  412584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-183854
	I0214 21:53:36.328361  412584 ssh_runner.go:195] Run: cat /version.json
	I0214 21:53:36.328404  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:36.328405  412584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:53:36.328464  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:36.351398  412584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33332 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa Username:docker}
	I0214 21:53:36.360869  412584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33332 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa Username:docker}
	I0214 21:53:36.446410  412584 ssh_runner.go:195] Run: systemctl --version
	I0214 21:53:36.576869  412584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:53:36.719825  412584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 21:53:36.723971  412584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:53:36.744936  412584 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0214 21:53:36.745015  412584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:53:36.777828  412584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
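
Note: the two find invocations above implement minikube's CNI cleanup by renaming, not deleting, matching configs. A minimal standalone sketch of the same sequence (quoting tightened relative to the logged invocations; the default /etc/cni/net.d layout is assumed):

    # Park any loopback CNI config; CRI-O ships its own loopback handling.
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # Park bridge/podman configs so the kindnet CNI chosen later owns pod networking.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

The .mk_disabled suffix keeps the originals recoverable.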
	I0214 21:53:36.777842  412584 start.go:495] detecting cgroup driver to use...
	I0214 21:53:36.777875  412584 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0214 21:53:36.777927  412584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:53:36.793675  412584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:53:36.805745  412584 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:53:36.805813  412584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:53:36.820797  412584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:53:36.835504  412584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:53:36.917012  412584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:53:37.014111  412584 docker.go:233] disabling docker service ...
	I0214 21:53:37.014176  412584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:53:37.041976  412584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:53:37.053705  412584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:53:37.150845  412584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:53:37.246195  412584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 21:53:37.259037  412584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:53:37.275846  412584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 21:53:37.275904  412584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:53:37.285953  412584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:53:37.286023  412584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:53:37.296095  412584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:53:37.306265  412584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:53:37.316207  412584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:53:37.325097  412584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:53:37.334742  412584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:53:37.350336  412584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
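
Taken together, the four sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these fields (a sketch: the TOML section headers and the file's other defaults are assumed, since only the sed expressions appear in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Setting the unprivileged port floor to 0 lets pods bind low ports (e.g. 80/443 for ingress) without NET_BIND_SERVICE.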
	I0214 21:53:37.359882  412584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:53:37.369002  412584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 21:53:37.377480  412584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:53:37.466142  412584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 21:53:37.577981  412584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:53:37.578051  412584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:53:37.581391  412584 start.go:563] Will wait 60s for crictl version
	I0214 21:53:37.581445  412584 ssh_runner.go:195] Run: which crictl
	I0214 21:53:37.585146  412584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:53:37.627514  412584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0214 21:53:37.627595  412584 ssh_runner.go:195] Run: crio --version
	I0214 21:53:37.668820  412584 ssh_runner.go:195] Run: crio --version
	I0214 21:53:37.709607  412584 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0214 21:53:37.712414  412584 cli_runner.go:164] Run: docker network inspect scheduled-stop-183854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 21:53:37.728676  412584 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0214 21:53:37.732161  412584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
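
The bash one-liner above is an idempotent /etc/hosts edit: drop any stale entry, append the fresh one, then copy the temp file back over the original. Spelled out:

    # Make host.minikube.internal resolve to the network gateway (192.168.76.1 here).
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.76.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts

The same pattern repeats below for control-plane.minikube.internal.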
	I0214 21:53:37.742753  412584 kubeadm.go:875] updating cluster {Name:scheduled-stop-183854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-183854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:53:37.742853  412584 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:53:37.742908  412584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:53:37.817624  412584 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 21:53:37.817636  412584 crio.go:433] Images already preloaded, skipping extraction
	I0214 21:53:37.817690  412584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:53:37.860205  412584 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 21:53:37.860218  412584 cache_images.go:84] Images are preloaded, skipping loading
	I0214 21:53:37.860224  412584 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.32.1 crio true true} ...
	I0214 21:53:37.860313  412584 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=scheduled-stop-183854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-183854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
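
The empty ExecStart= in the drop-in above is the standard systemd override idiom: for a service that is not Type=oneshot, systemd rejects a unit with two ExecStart= lines, so the blank assignment clears the inherited command before the next line replaces it. After the drop-in is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 371-byte transfer below), it is picked up with:

    sudo systemctl daemon-reload    # re-read unit files and drop-ins
    sudo systemctl start kubelet    # run with the overridden ExecStart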
	I0214 21:53:37.860389  412584 ssh_runner.go:195] Run: crio config
	I0214 21:53:37.913695  412584 cni.go:84] Creating CNI manager for ""
	I0214 21:53:37.913705  412584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 21:53:37.913714  412584 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:53:37.913736  412584 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-183854 NodeName:scheduled-stop-183854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 21:53:37.913877  412584 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-183854"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
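This rendered config is the 2295-byte kubeadm.yaml.new scp'd below; once moved into place, the whole control plane comes up from one config-driven init. An abridged form of the invocation the log runs at 21:53:40 (the real command lists many more --ignore-preflight-errors entries):

    sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification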
	I0214 21:53:37.913948  412584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 21:53:37.923129  412584 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:53:37.923197  412584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:53:37.932070  412584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0214 21:53:37.950651  412584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:53:37.969021  412584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0214 21:53:37.986847  412584 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0214 21:53:37.990204  412584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:53:38.000963  412584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:53:38.089285  412584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:53:38.103963  412584 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854 for IP: 192.168.76.2
	I0214 21:53:38.103975  412584 certs.go:194] generating shared ca certs ...
	I0214 21:53:38.103989  412584 certs.go:226] acquiring lock for ca certs: {Name:mk331a8d0ee567d6460e2465c9b7c32324663cbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:38.104126  412584 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-272800/.minikube/ca.key
	I0214 21:53:38.104168  412584 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.key
	I0214 21:53:38.104174  412584 certs.go:256] generating profile certs ...
	I0214 21:53:38.104231  412584 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/client.key
	I0214 21:53:38.104241  412584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/client.crt with IP's: []
	I0214 21:53:39.525070  412584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/client.crt ...
	I0214 21:53:39.525089  412584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/client.crt: {Name:mka693f1a073d42aab9b34c60228fe07dc8d5bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:39.525304  412584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/client.key ...
	I0214 21:53:39.525312  412584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/client.key: {Name:mk7aeaaa5f1bd86f6ed4ebea2f1da91ff7e3bee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:39.525408  412584 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.key.218f927e
	I0214 21:53:39.525421  412584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.crt.218f927e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0214 21:53:40.002229  412584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.crt.218f927e ...
	I0214 21:53:40.002248  412584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.crt.218f927e: {Name:mk70cdbc5645e35f4d7e0649372e8391d4176ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:40.002468  412584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.key.218f927e ...
	I0214 21:53:40.002477  412584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.key.218f927e: {Name:mk99f6c45190702fdefce23a25adc92c25d2b6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:40.002569  412584 certs.go:381] copying /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.crt.218f927e -> /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.crt
	I0214 21:53:40.002648  412584 certs.go:385] copying /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.key.218f927e -> /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.key
	I0214 21:53:40.002704  412584 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/proxy-client.key
	I0214 21:53:40.002716  412584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/proxy-client.crt with IP's: []
	I0214 21:53:40.224566  412584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/proxy-client.crt ...
	I0214 21:53:40.224581  412584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/proxy-client.crt: {Name:mk88648dd946ee09f8bfa40be5502bf6c2a10e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:40.224770  412584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/proxy-client.key ...
	I0214 21:53:40.224778  412584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/proxy-client.key: {Name:mk7f60a173e33ca553b7bd0eef11f7e1fb868902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
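
crypto.go generates these profile certs in-process, but the equivalent can be done with plain openssl; a hand-rolled sketch for the "minikube-user" client cert, assuming the minikubeCA pair (ca.crt/ca.key) reused above, with illustrative file names and subject:

    # Key and CSR for the client identity (subject here is an assumption).
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key \
      -subj "/O=system:masters/CN=minikube-user" -out client.csr
    # Sign it with the cluster CA.
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out client.crt

The apiserver cert differs only in being a serving cert carrying the SAN IPs listed above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2).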
	I0214 21:53:40.224961  412584 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/278186.pem (1338 bytes)
	W0214 21:53:40.224998  412584 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-272800/.minikube/certs/278186_empty.pem, impossibly tiny 0 bytes
	I0214 21:53:40.225005  412584 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca-key.pem (1679 bytes)
	I0214 21:53:40.225031  412584 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:53:40.225055  412584 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:53:40.225078  412584 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/certs/key.pem (1675 bytes)
	I0214 21:53:40.225118  412584 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-272800/.minikube/files/etc/ssl/certs/2781862.pem (1708 bytes)
	I0214 21:53:40.225715  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:53:40.250005  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:53:40.273093  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:53:40.296922  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0214 21:53:40.320188  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0214 21:53:40.343311  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 21:53:40.366328  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:53:40.389503  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/scheduled-stop-183854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 21:53:40.412613  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/files/etc/ssl/certs/2781862.pem --> /usr/share/ca-certificates/2781862.pem (1708 bytes)
	I0214 21:53:40.436884  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:53:40.461398  412584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-272800/.minikube/certs/278186.pem --> /usr/share/ca-certificates/278186.pem (1338 bytes)
	I0214 21:53:40.485673  412584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 21:53:40.511606  412584 ssh_runner.go:195] Run: openssl version
	I0214 21:53:40.517413  412584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2781862.pem && ln -fs /usr/share/ca-certificates/2781862.pem /etc/ssl/certs/2781862.pem"
	I0214 21:53:40.527968  412584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2781862.pem
	I0214 21:53:40.532020  412584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 21:23 /usr/share/ca-certificates/2781862.pem
	I0214 21:53:40.532075  412584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2781862.pem
	I0214 21:53:40.539295  412584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2781862.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 21:53:40.551983  412584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:53:40.561215  412584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:53:40.564762  412584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:53:40.564824  412584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:53:40.571511  412584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 21:53:40.580582  412584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/278186.pem && ln -fs /usr/share/ca-certificates/278186.pem /etc/ssl/certs/278186.pem"
	I0214 21:53:40.589515  412584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278186.pem
	I0214 21:53:40.592812  412584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 21:23 /usr/share/ca-certificates/278186.pem
	I0214 21:53:40.592866  412584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278186.pem
	I0214 21:53:40.599828  412584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/278186.pem /etc/ssl/certs/51391683.0"
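
The test/ln/openssl triple above, repeated once per bundle, builds the standard OpenSSL hashed-directory layout: an /etc/ssl/certs/<subject-hash>.0 symlink lets verifiers locate a CA by hash. One round, spelled out:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"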
	I0214 21:53:40.609181  412584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:53:40.612389  412584 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 21:53:40.612439  412584 kubeadm.go:392] StartCluster: {Name:scheduled-stop-183854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-183854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:53:40.612509  412584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:53:40.612565  412584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:53:40.651680  412584 cri.go:89] found id: ""
	I0214 21:53:40.651744  412584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 21:53:40.661009  412584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 21:53:40.669924  412584 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0214 21:53:40.669987  412584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:53:40.678710  412584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:53:40.678720  412584 kubeadm.go:157] found existing configuration files:
	
	I0214 21:53:40.678775  412584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:53:40.687648  412584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:53:40.687722  412584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:53:40.696254  412584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:53:40.704769  412584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:53:40.704822  412584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:53:40.713132  412584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:53:40.722111  412584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:53:40.722168  412584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:53:40.730775  412584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:53:40.739879  412584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:53:40.739940  412584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 21:53:40.748621  412584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 21:53:40.807556  412584 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0214 21:53:40.807795  412584 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1077-aws\n", err: exit status 1
	I0214 21:53:40.874796  412584 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 21:53:57.157627  412584 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 21:53:57.157733  412584 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 21:53:57.157916  412584 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0214 21:53:57.157997  412584 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1077-aws
	I0214 21:53:57.158038  412584 kubeadm.go:310] OS: Linux
	I0214 21:53:57.158114  412584 kubeadm.go:310] CGROUPS_CPU: enabled
	I0214 21:53:57.158167  412584 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0214 21:53:57.158214  412584 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0214 21:53:57.158261  412584 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0214 21:53:57.158308  412584 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0214 21:53:57.158354  412584 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0214 21:53:57.158398  412584 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0214 21:53:57.158444  412584 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0214 21:53:57.158489  412584 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0214 21:53:57.158560  412584 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 21:53:57.158652  412584 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 21:53:57.158740  412584 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 21:53:57.158801  412584 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 21:53:57.161825  412584 out.go:235]   - Generating certificates and keys ...
	I0214 21:53:57.161930  412584 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 21:53:57.161993  412584 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 21:53:57.162058  412584 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 21:53:57.162115  412584 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 21:53:57.162175  412584 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 21:53:57.162223  412584 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 21:53:57.162278  412584 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 21:53:57.162403  412584 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-183854] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0214 21:53:57.162456  412584 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 21:53:57.162576  412584 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-183854] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0214 21:53:57.162640  412584 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 21:53:57.162702  412584 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 21:53:57.162745  412584 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 21:53:57.162802  412584 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 21:53:57.162852  412584 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 21:53:57.162907  412584 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 21:53:57.162960  412584 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 21:53:57.163022  412584 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 21:53:57.163100  412584 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 21:53:57.163180  412584 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 21:53:57.163245  412584 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 21:53:57.168506  412584 out.go:235]   - Booting up control plane ...
	I0214 21:53:57.168654  412584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 21:53:57.168744  412584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 21:53:57.168816  412584 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 21:53:57.168942  412584 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 21:53:57.169041  412584 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 21:53:57.169089  412584 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 21:53:57.169245  412584 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 21:53:57.169366  412584 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 21:53:57.169447  412584 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.5016394s
	I0214 21:53:57.169524  412584 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 21:53:57.169581  412584 kubeadm.go:310] [api-check] The API server is healthy after 6.001402576s
	I0214 21:53:57.169714  412584 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 21:53:57.169867  412584 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 21:53:57.169936  412584 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 21:53:57.170155  412584 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-183854 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 21:53:57.170227  412584 kubeadm.go:310] [bootstrap-token] Using token: 64byck.a2l8lojrojqz6txh
	I0214 21:53:57.173057  412584 out.go:235]   - Configuring RBAC rules ...
	I0214 21:53:57.173214  412584 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 21:53:57.173296  412584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 21:53:57.173470  412584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 21:53:57.173612  412584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 21:53:57.173726  412584 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 21:53:57.173817  412584 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 21:53:57.173929  412584 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 21:53:57.173978  412584 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 21:53:57.174028  412584 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 21:53:57.174031  412584 kubeadm.go:310] 
	I0214 21:53:57.174090  412584 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 21:53:57.174092  412584 kubeadm.go:310] 
	I0214 21:53:57.174167  412584 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 21:53:57.174174  412584 kubeadm.go:310] 
	I0214 21:53:57.174203  412584 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 21:53:57.174282  412584 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 21:53:57.174338  412584 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 21:53:57.174342  412584 kubeadm.go:310] 
	I0214 21:53:57.174397  412584 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 21:53:57.174400  412584 kubeadm.go:310] 
	I0214 21:53:57.174471  412584 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 21:53:57.174479  412584 kubeadm.go:310] 
	I0214 21:53:57.174539  412584 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 21:53:57.174617  412584 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 21:53:57.174696  412584 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 21:53:57.174703  412584 kubeadm.go:310] 
	I0214 21:53:57.174800  412584 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 21:53:57.174875  412584 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 21:53:57.174881  412584 kubeadm.go:310] 
	I0214 21:53:57.174963  412584 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 64byck.a2l8lojrojqz6txh \
	I0214 21:53:57.175277  412584 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c06c75d7df404df93ce031bbacdfe2f3cd0cfb1441a4d171159ff58cc3179696 \
	I0214 21:53:57.175298  412584 kubeadm.go:310] 	--control-plane 
	I0214 21:53:57.175301  412584 kubeadm.go:310] 
	I0214 21:53:57.175387  412584 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 21:53:57.175390  412584 kubeadm.go:310] 
	I0214 21:53:57.175475  412584 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 64byck.a2l8lojrojqz6txh \
	I0214 21:53:57.175607  412584 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c06c75d7df404df93ce031bbacdfe2f3cd0cfb1441a4d171159ff58cc3179696 
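
The sha256 value in the join commands pins the cluster CA. If it is ever needed without this log, it can be recomputed from the CA public key with the standard kubeadm recipe (CA path as staged on this node):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* //'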
	I0214 21:53:57.175628  412584 cni.go:84] Creating CNI manager for ""
	I0214 21:53:57.175635  412584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 21:53:57.180628  412584 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0214 21:53:57.183580  412584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 21:53:57.187631  412584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0214 21:53:57.187641  412584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0214 21:53:57.206522  412584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 21:53:57.500875  412584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 21:53:57.501005  412584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 21:53:57.501090  412584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-183854 minikube.k8s.io/updated_at=2025_02_14T21_53_57_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=scheduled-stop-183854 minikube.k8s.io/primary=true
	I0214 21:53:57.518584  412584 ops.go:34] apiserver oom_adj: -16
	I0214 21:53:57.634309  412584 kubeadm.go:1105] duration metric: took 133.351101ms to wait for elevateKubeSystemPrivileges
	I0214 21:53:57.669578  412584 kubeadm.go:394] duration metric: took 17.057134779s to StartCluster
	I0214 21:53:57.669602  412584 settings.go:142] acquiring lock: {Name:mkc0e41ab9ab5cb3c1dd458e58b0ec830c4e7cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:57.669661  412584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:53:57.670363  412584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/kubeconfig: {Name:mke18ca9b25400737f047f62f0239cf4640d5a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:53:57.670565  412584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:53:57.670646  412584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 21:53:57.670860  412584 config.go:182] Loaded profile config "scheduled-stop-183854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:53:57.670900  412584 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0214 21:53:57.671009  412584 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-183854"
	I0214 21:53:57.671022  412584 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-183854"
	I0214 21:53:57.671027  412584 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-183854"
	I0214 21:53:57.671044  412584 host.go:66] Checking if "scheduled-stop-183854" exists ...
	I0214 21:53:57.671044  412584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-183854"
	I0214 21:53:57.671416  412584 cli_runner.go:164] Run: docker container inspect scheduled-stop-183854 --format={{.State.Status}}
	I0214 21:53:57.671551  412584 cli_runner.go:164] Run: docker container inspect scheduled-stop-183854 --format={{.State.Status}}
	I0214 21:53:57.673829  412584 out.go:177] * Verifying Kubernetes components...
	I0214 21:53:57.679010  412584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:53:57.724351  412584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:53:57.724712  412584 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-183854"
	I0214 21:53:57.724738  412584 host.go:66] Checking if "scheduled-stop-183854" exists ...
	I0214 21:53:57.725152  412584 cli_runner.go:164] Run: docker container inspect scheduled-stop-183854 --format={{.State.Status}}
	I0214 21:53:57.727336  412584 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 21:53:57.727348  412584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 21:53:57.727406  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:57.758512  412584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33332 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa Username:docker}
	I0214 21:53:57.766406  412584 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 21:53:57.766418  412584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 21:53:57.766480  412584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-183854
	I0214 21:53:57.798320  412584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33332 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/scheduled-stop-183854/id_rsa Username:docker}
	I0214 21:53:57.932121  412584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
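
The sed pipeline above edits the CoreDNS Corefile in flight: it inserts a hosts block ahead of the forward plugin and a log directive ahead of errors, then replaces the ConfigMap. The relevant Corefile fragment afterwards (other plugins elided):

    log
    errors
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

fallthrough lets names not found in the hosts block continue to the remaining plugins.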
	I0214 21:53:57.932208  412584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:53:57.952033  412584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 21:53:58.007527  412584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 21:53:58.347686  412584 api_server.go:52] waiting for apiserver process to appear ...
	I0214 21:53:58.347737  412584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:53:58.347802  412584 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0214 21:53:58.540522  412584 api_server.go:72] duration metric: took 869.929282ms to wait for apiserver process to appear ...
	I0214 21:53:58.540531  412584 api_server.go:88] waiting for apiserver healthz status ...
	I0214 21:53:58.540557  412584 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0214 21:53:58.555194  412584 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
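
The same health probe can be replayed by hand; /healthz returns a bare "ok" with HTTP 200 once the apiserver is serving (-k skips TLS verification, fine for a quick manual check):

    curl -k https://192.168.76.2:8443/healthz
    # ok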
	I0214 21:53:58.556374  412584 api_server.go:141] control plane version: v1.32.1
	I0214 21:53:58.556389  412584 api_server.go:131] duration metric: took 15.853028ms to wait for apiserver health ...
	I0214 21:53:58.556396  412584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 21:53:58.558189  412584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0214 21:53:58.559764  412584 system_pods.go:59] 5 kube-system pods found
	I0214 21:53:58.559784  412584 system_pods.go:61] "etcd-scheduled-stop-183854" [f9f48004-89f0-488d-bbd1-a7ac4a79b35d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 21:53:58.559792  412584 system_pods.go:61] "kube-apiserver-scheduled-stop-183854" [b4c39e05-f23a-4d21-8bb9-7c745efae8d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 21:53:58.559799  412584 system_pods.go:61] "kube-controller-manager-scheduled-stop-183854" [d4c5e139-3723-4bcc-b425-df50df67faad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 21:53:58.559805  412584 system_pods.go:61] "kube-scheduler-scheduled-stop-183854" [f40d795b-6fdd-4a45-95e0-78dcab20a693] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 21:53:58.559809  412584 system_pods.go:61] "storage-provisioner" [f5db0611-bd61-418a-b329-dc3a7548800c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0214 21:53:58.559814  412584 system_pods.go:74] duration metric: took 3.41299ms to wait for pod list to return data ...
	I0214 21:53:58.559824  412584 kubeadm.go:578] duration metric: took 889.237358ms to wait for: map[apiserver:true system_pods:true]
	I0214 21:53:58.559838  412584 node_conditions.go:102] verifying NodePressure condition ...
	I0214 21:53:58.561020  412584 addons.go:514] duration metric: took 890.120905ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0214 21:53:58.562330  412584 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 21:53:58.562346  412584 node_conditions.go:123] node cpu capacity is 2
	I0214 21:53:58.562356  412584 node_conditions.go:105] duration metric: took 2.514732ms to run NodePressure ...
	I0214 21:53:58.562367  412584 start.go:241] waiting for startup goroutines ...
	I0214 21:53:58.851236  412584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-183854" context rescaled to 1 replicas
	I0214 21:53:58.851266  412584 start.go:246] waiting for cluster config update ...
	I0214 21:53:58.851277  412584 start.go:255] writing updated cluster config ...
	I0214 21:53:58.851614  412584 ssh_runner.go:195] Run: rm -f paused
	I0214 21:53:58.916321  412584 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 21:53:58.919510  412584 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-183854" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.914694256Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.32.1" id=618ff707-31ce-4e00-81a5-016d187832b6 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.914830662Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c,RepoTags:[registry.k8s.io/kube-scheduler:v1.32.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1 registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e],Size_:68973892,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=618ff707-31ce-4e00-81a5-016d187832b6 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.915557355Z" level=info msg="Creating container: kube-system/kube-scheduler-scheduled-stop-183854/kube-scheduler" id=0515fb34-e025-4993-a636-dc34650111b6 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.915650112Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.917568413Z" level=info msg="Ran pod sandbox a3ad01c1f8aaefd4a521a85c06500806cbda38d42e7e99c1c61d3ab219535d1d with infra container: kube-system/etcd-scheduled-stop-183854/POD" id=cdb06249-18d6-4139-9d81-8bbf87dfe5b4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.918851998Z" level=info msg="Creating container: kube-system/kube-apiserver-scheduled-stop-183854/kube-apiserver" id=813dffd4-d207-4106-bf4b-b4b98b598616 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.919022217Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.920121396Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.5.16-0" id=77e55be3-82b6-4ba6-b232-253026779b89 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.926780153Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,RepoTags:[registry.k8s.io/etcd:3.5.16-0],RepoDigests:[registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1 registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5],Size_:143226622,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=77e55be3-82b6-4ba6-b232-253026779b89 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.927594162Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.5.16-0" id=1e60ac1c-9fa5-481d-b33e-0b2c8f2a6651 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.927749900Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,RepoTags:[registry.k8s.io/etcd:3.5.16-0],RepoDigests:[registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1 registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5],Size_:143226622,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=1e60ac1c-9fa5-481d-b33e-0b2c8f2a6651 name=/runtime.v1.ImageService/ImageStatus
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.928368156Z" level=info msg="Creating container: kube-system/etcd-scheduled-stop-183854/etcd" id=dc281eba-8c2a-47be-8a36-88717a086e08 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:53:49 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:49.928453135Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.037114349Z" level=info msg="Created container 16f31d9b918d45d0e3a6424fc447f6adfffce70f08fb6d92d73964ad5e868ec8: kube-system/kube-scheduler-scheduled-stop-183854/kube-scheduler" id=0515fb34-e025-4993-a636-dc34650111b6 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.039007157Z" level=info msg="Starting container: 16f31d9b918d45d0e3a6424fc447f6adfffce70f08fb6d92d73964ad5e868ec8" id=7c904e54-55ab-4e80-b6b3-d8380b8555a5 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.054618673Z" level=info msg="Started container" PID=1379 containerID=16f31d9b918d45d0e3a6424fc447f6adfffce70f08fb6d92d73964ad5e868ec8 description=kube-system/kube-scheduler-scheduled-stop-183854/kube-scheduler id=7c904e54-55ab-4e80-b6b3-d8380b8555a5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=138730d0427b11a480f2b5212cd854fc4cd9435f0661fb5d3de777beaa05f5c4
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.099400794Z" level=info msg="Created container 0f6899fc1c38657fa12caa3cef31879d2860c5ba6cb7217d091b393a095a8e47: kube-system/kube-apiserver-scheduled-stop-183854/kube-apiserver" id=813dffd4-d207-4106-bf4b-b4b98b598616 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.100115451Z" level=info msg="Starting container: 0f6899fc1c38657fa12caa3cef31879d2860c5ba6cb7217d091b393a095a8e47" id=6330f6d6-4025-4186-8b00-4c61cfd28da6 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.107748072Z" level=info msg="Started container" PID=1432 containerID=0f6899fc1c38657fa12caa3cef31879d2860c5ba6cb7217d091b393a095a8e47 description=kube-system/kube-apiserver-scheduled-stop-183854/kube-apiserver id=6330f6d6-4025-4186-8b00-4c61cfd28da6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32aaa4fd73ee5b40c759d7fcecdaafda8beabcd4640a74e2c68a7b5155a4f396
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.117845582Z" level=info msg="Created container 57369d49f2e95f0ebf828f83066c65b8852699042e954faf9547a4e72dfcece7: kube-system/kube-controller-manager-scheduled-stop-183854/kube-controller-manager" id=a4b6bb7e-838c-4e0f-9363-7a7bf1eea1e4 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.118567188Z" level=info msg="Starting container: 57369d49f2e95f0ebf828f83066c65b8852699042e954faf9547a4e72dfcece7" id=d98c2a95-c3d2-4c74-bc55-0a91f0daf403 name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.136346590Z" level=info msg="Started container" PID=1437 containerID=57369d49f2e95f0ebf828f83066c65b8852699042e954faf9547a4e72dfcece7 description=kube-system/kube-controller-manager-scheduled-stop-183854/kube-controller-manager id=d98c2a95-c3d2-4c74-bc55-0a91f0daf403 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d096193585a54509ddfe87b340bfa6733ed3c96b2dec29d5c80d0510bbc470f5
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.139236468Z" level=info msg="Created container fc67460d5bf8a62020c32f1f404c89d262400cc3b534135a913ad2bc1e208ef9: kube-system/etcd-scheduled-stop-183854/etcd" id=dc281eba-8c2a-47be-8a36-88717a086e08 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.139937110Z" level=info msg="Starting container: fc67460d5bf8a62020c32f1f404c89d262400cc3b534135a913ad2bc1e208ef9" id=4e32cddb-ee01-4ed7-92f7-d004345a7e5b name=/runtime.v1.RuntimeService/StartContainer
	Feb 14 21:53:50 scheduled-stop-183854 crio[992]: time="2025-02-14 21:53:50.153066787Z" level=info msg="Started container" PID=1398 containerID=fc67460d5bf8a62020c32f1f404c89d262400cc3b534135a913ad2bc1e208ef9 description=kube-system/etcd-scheduled-stop-183854/etcd id=4e32cddb-ee01-4ed7-92f7-d004345a7e5b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3ad01c1f8aaefd4a521a85c06500806cbda38d42e7e99c1c61d3ab219535d1d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57369d49f2e95       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13   10 seconds ago      Running             kube-controller-manager   0                   d096193585a54       kube-controller-manager-scheduled-stop-183854
	0f6899fc1c386       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19   10 seconds ago      Running             kube-apiserver            0                   32aaa4fd73ee5       kube-apiserver-scheduled-stop-183854
	fc67460d5bf8a       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82   10 seconds ago      Running             etcd                      0                   a3ad01c1f8aae       etcd-scheduled-stop-183854
	16f31d9b918d4       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c   10 seconds ago      Running             kube-scheduler            0                   138730d0427b1       kube-scheduler-scheduled-stop-183854
	
	
	==> describe nodes <==
	Name:               scheduled-stop-183854
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-183854
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a
	                    minikube.k8s.io/name=scheduled-stop-183854
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_14T21_53_57_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 14 Feb 2025 21:53:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-183854
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 14 Feb 2025 21:53:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 14 Feb 2025 21:53:56 +0000   Fri, 14 Feb 2025 21:53:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 14 Feb 2025 21:53:56 +0000   Fri, 14 Feb 2025 21:53:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 14 Feb 2025 21:53:56 +0000   Fri, 14 Feb 2025 21:53:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 14 Feb 2025 21:53:56 +0000   Fri, 14 Feb 2025 21:53:50 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-183854
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b070e7f4a0ab43c1986e86f8210f00cf
	  System UUID:                4a49f1dd-a673-48e9-9d93-5167162c3c20
	  Boot ID:                    e73e80e8-f4f5-4b6f-baaf-c79d4b748ea0
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-183854                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-183854             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-183854    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-183854             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node scheduled-stop-183854 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node scheduled-stop-183854 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node scheduled-stop-183854 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4s                 kubelet          Node scheduled-stop-183854 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet          Node scheduled-stop-183854 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet          Node scheduled-stop-183854 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           0s                 node-controller  Node scheduled-stop-183854 event: Registered Node scheduled-stop-183854 in Controller
	
	
	==> dmesg <==
	[Feb14 20:18] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb14 20:49] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +1.296116] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Feb14 21:53] hrtimer: interrupt took 13394048 ns
	
	
	==> etcd [fc67460d5bf8a62020c32f1f404c89d262400cc3b534135a913ad2bc1e208ef9] <==
	{"level":"info","ts":"2025-02-14T21:53:50.276850Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-14T21:53:50.277144Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-02-14T21:53:50.278732Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-02-14T21:53:50.279128Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-14T21:53:50.283073Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-14T21:53:50.434872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-02-14T21:53:50.435000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-02-14T21:53:50.435081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-02-14T21:53:50.435134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-02-14T21:53:50.435175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-02-14T21:53:50.435210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-02-14T21:53:50.435248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-02-14T21:53:50.448883Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:53:50.455343Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:scheduled-stop-183854 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-14T21:53:50.455436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:53:50.458840Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:53:50.457416Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:53:50.459254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:53:50.459323Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:53:50.459806Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:53:50.460531Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-02-14T21:53:50.467607Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:53:50.468354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-14T21:53:50.468922Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-14T21:53:50.468976Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:54:00 up  2:36,  0 users,  load average: 1.72, 1.87, 2.22
	Linux scheduled-stop-183854 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [0f6899fc1c38657fa12caa3cef31879d2860c5ba6cb7217d091b393a095a8e47] <==
	I0214 21:53:53.840938       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0214 21:53:53.841075       1 aggregator.go:171] initial CRD sync complete...
	I0214 21:53:53.841089       1 autoregister_controller.go:144] Starting autoregister controller
	I0214 21:53:53.841095       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 21:53:53.841101       1 cache.go:39] Caches are synced for autoregister controller
	I0214 21:53:53.842266       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0214 21:53:53.842296       1 policy_source.go:240] refreshing policies
	I0214 21:53:53.867556       1 controller.go:615] quota admission added evaluator for: namespaces
	E0214 21:53:53.872059       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0214 21:53:53.890017       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0214 21:53:54.081660       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 21:53:54.554331       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0214 21:53:54.560855       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0214 21:53:54.560877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 21:53:55.272935       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 21:53:55.323230       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0214 21:53:55.457220       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0214 21:53:55.467745       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0214 21:53:55.468955       1 controller.go:615] quota admission added evaluator for: endpoints
	I0214 21:53:55.474763       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 21:53:55.764742       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0214 21:53:56.560302       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0214 21:53:56.575891       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0214 21:53:56.595821       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0214 21:54:00.411523       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [57369d49f2e95f0ebf828f83066c65b8852699042e954faf9547a4e72dfcece7] <==
	I0214 21:54:00.344588       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0214 21:54:00.347154       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0214 21:54:00.347606       1 shared_informer.go:320] Caches are synced for GC
	I0214 21:54:00.350505       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:54:00.352722       1 shared_informer.go:320] Caches are synced for service account
	I0214 21:54:00.360001       1 shared_informer.go:320] Caches are synced for job
	I0214 21:54:00.360154       1 shared_informer.go:320] Caches are synced for namespace
	I0214 21:54:00.369367       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 21:54:00.369607       1 shared_informer.go:320] Caches are synced for expand
	I0214 21:54:00.371901       1 shared_informer.go:320] Caches are synced for crt configmap
	I0214 21:54:00.377215       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0214 21:54:00.378549       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0214 21:54:00.386220       1 shared_informer.go:320] Caches are synced for TTL
	I0214 21:54:00.395181       1 shared_informer.go:320] Caches are synced for daemon sets
	I0214 21:54:00.395326       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0214 21:54:00.395433       1 shared_informer.go:320] Caches are synced for taint
	I0214 21:54:00.395547       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0214 21:54:00.395656       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="scheduled-stop-183854"
	I0214 21:54:00.395762       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0214 21:54:00.401527       1 shared_informer.go:320] Caches are synced for attach detach
	I0214 21:54:00.416236       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 21:54:00.419394       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:54:00.419427       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0214 21:54:00.419437       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0214 21:54:00.419920       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	
	
	==> kube-scheduler [16f31d9b918d45d0e3a6424fc447f6adfffce70f08fb6d92d73964ad5e868ec8] <==
	W0214 21:53:54.080599       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 21:53:54.081001       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.084951       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0214 21:53:54.085045       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.085128       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 21:53:54.085170       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.085310       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0214 21:53:54.085361       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.085442       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0214 21:53:54.085485       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.085582       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 21:53:54.085649       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.085749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0214 21:53:54.085792       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.085964       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 21:53:54.086016       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0214 21:53:54.086108       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 21:53:54.086151       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.088659       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 21:53:54.088747       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.088858       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0214 21:53:54.088927       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0214 21:53:54.089033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0214 21:53:54.089085       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0214 21:53:55.374633       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882414    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f1181bd58a74f09652ef0e969b308fd-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-183854\" (UID: \"9f1181bd58a74f09652ef0e969b308fd\") " pod="kube-system/kube-controller-manager-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882444    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/5d462be9f800f028e7762fe357ef8345-etcd-data\") pod \"etcd-scheduled-stop-183854\" (UID: \"5d462be9f800f028e7762fe357ef8345\") " pod="kube-system/etcd-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882465    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3db425b9c257b876a20b5330294cb33-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-183854\" (UID: \"f3db425b9c257b876a20b5330294cb33\") " pod="kube-system/kube-apiserver-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882508    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f1181bd58a74f09652ef0e969b308fd-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-183854\" (UID: \"9f1181bd58a74f09652ef0e969b308fd\") " pod="kube-system/kube-controller-manager-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882532    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3db425b9c257b876a20b5330294cb33-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-183854\" (UID: \"f3db425b9c257b876a20b5330294cb33\") " pod="kube-system/kube-apiserver-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882566    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/609f1aaa4928082cd09fce12723f31e4-kubeconfig\") pod \"kube-scheduler-scheduled-stop-183854\" (UID: \"609f1aaa4928082cd09fce12723f31e4\") " pod="kube-system/kube-scheduler-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882587    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f1181bd58a74f09652ef0e969b308fd-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-183854\" (UID: \"9f1181bd58a74f09652ef0e969b308fd\") " pod="kube-system/kube-controller-manager-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882620    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/5d462be9f800f028e7762fe357ef8345-etcd-certs\") pod \"etcd-scheduled-stop-183854\" (UID: \"5d462be9f800f028e7762fe357ef8345\") " pod="kube-system/etcd-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882653    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3db425b9c257b876a20b5330294cb33-ca-certs\") pod \"kube-apiserver-scheduled-stop-183854\" (UID: \"f3db425b9c257b876a20b5330294cb33\") " pod="kube-system/kube-apiserver-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882676    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3db425b9c257b876a20b5330294cb33-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-183854\" (UID: \"f3db425b9c257b876a20b5330294cb33\") " pod="kube-system/kube-apiserver-scheduled-stop-183854"
	Feb 14 21:53:56 scheduled-stop-183854 kubelet[1552]: I0214 21:53:56.882696    1552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f1181bd58a74f09652ef0e969b308fd-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-183854\" (UID: \"9f1181bd58a74f09652ef0e969b308fd\") " pod="kube-system/kube-controller-manager-scheduled-stop-183854"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.463699    1552 apiserver.go:52] "Watching apiserver"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.479537    1552 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.596034    1552 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-183854"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.596346    1552 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-183854"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.596761    1552 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-183854"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.627461    1552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-183854" podStartSLOduration=1.6274408120000001 podStartE2EDuration="1.627440812s" podCreationTimestamp="2025-02-14 21:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 21:53:57.581396438 +0000 UTC m=+1.205931851" watchObservedRunningTime="2025-02-14 21:53:57.627440812 +0000 UTC m=+1.251976225"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: E0214 21:53:57.652932    1552 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-183854\" already exists" pod="kube-system/etcd-scheduled-stop-183854"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: E0214 21:53:57.653180    1552 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-183854\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-183854"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: E0214 21:53:57.653306    1552 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-183854\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-183854"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.660285    1552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-183854" podStartSLOduration=1.66026418 podStartE2EDuration="1.66026418s" podCreationTimestamp="2025-02-14 21:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 21:53:57.62819701 +0000 UTC m=+1.252732431" watchObservedRunningTime="2025-02-14 21:53:57.66026418 +0000 UTC m=+1.284799593"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.694559    1552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-183854" podStartSLOduration=1.6945371599999999 podStartE2EDuration="1.69453716s" podCreationTimestamp="2025-02-14 21:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 21:53:57.660758215 +0000 UTC m=+1.285293636" watchObservedRunningTime="2025-02-14 21:53:57.69453716 +0000 UTC m=+1.319072581"
	Feb 14 21:53:57 scheduled-stop-183854 kubelet[1552]: I0214 21:53:57.728714    1552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-183854" podStartSLOduration=1.7286917590000002 podStartE2EDuration="1.728691759s" podCreationTimestamp="2025-02-14 21:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 21:53:57.697394358 +0000 UTC m=+1.321929779" watchObservedRunningTime="2025-02-14 21:53:57.728691759 +0000 UTC m=+1.353227172"
	Feb 14 21:54:00 scheduled-stop-183854 kubelet[1552]: I0214 21:54:00.437699    1552 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 14 21:54:00 scheduled-stop-183854 kubelet[1552]: I0214 21:54:00.438968    1552 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-183854 -n scheduled-stop-183854
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-183854 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-dn7m4 kindnet-94v2g kube-proxy-zq5dw storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-183854 describe pod coredns-668d6bf9bc-dn7m4 kindnet-94v2g kube-proxy-zq5dw storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-183854 describe pod coredns-668d6bf9bc-dn7m4 kindnet-94v2g kube-proxy-zq5dw storage-provisioner: exit status 1 (109.31596ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-dn7m4" not found
	Error from server (NotFound): pods "kindnet-94v2g" not found
	Error from server (NotFound): pods "kube-proxy-zq5dw" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-183854 describe pod coredns-668d6bf9bc-dn7m4 kindnet-94v2g kube-proxy-zq5dw storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-183854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-183854
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-183854: (2.047042379s)
--- FAIL: TestScheduledStopUnix (36.84s)
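Note on the NotFound errors above: helpers_test.go:277 runs kubectl describe pod without a -n flag, so kubectl searches the default namespace, while coredns, kindnet, kube-proxy, and storage-provisioner live in kube-system; the pods it could not find are the same ones helpers_test.go:272 listed as non-running.

To triage a TestScheduledStopUnix failure by hand, a minimal sketch of what the test exercises (profile name taken from this run; --schedule and --cancel-scheduled are minikube's scheduled-stop flags; the 5m delay is illustrative, not the test's exact timing):

	# start a throwaway profile, arm a scheduled stop, then cancel it
	out/minikube-linux-arm64 start -p scheduled-stop-183854 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p scheduled-stop-183854 --schedule 5m
	out/minikube-linux-arm64 stop -p scheduled-stop-183854 --cancel-scheduled
	out/minikube-linux-arm64 status -p scheduled-stop-183854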

                                                
                                    

Test pass (296/331)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 6.23
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.1/json-events 5.44
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.1
18 TestDownloadOnly/v1.32.1/DeleteAll 0.22
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 188.94
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.99
35 TestAddons/parallel/Registry 18.79
37 TestAddons/parallel/InspektorGadget 12.09
38 TestAddons/parallel/MetricsServer 6.84
40 TestAddons/parallel/CSI 51.64
41 TestAddons/parallel/Headlamp 16.76
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 53.61
44 TestAddons/parallel/NvidiaDevicePlugin 6.57
45 TestAddons/parallel/Yakd 11.8
47 TestAddons/StoppedEnableDisable 12.15
48 TestCertOptions 41.13
49 TestCertExpiration 248.46
51 TestForceSystemdFlag 48.9
52 TestForceSystemdEnv 43.22
58 TestErrorSpam/setup 31.19
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.19
61 TestErrorSpam/pause 1.79
62 TestErrorSpam/unpause 1.82
63 TestErrorSpam/stop 1.46
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 51.28
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.37
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.45
75 TestFunctional/serial/CacheCmd/cache/add_local 1.4
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.24
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 31.56
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.73
86 TestFunctional/serial/LogsFileCmd 1.74
87 TestFunctional/serial/InvalidService 4.68
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 41.46
91 TestFunctional/parallel/DryRun 0.51
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.06
97 TestFunctional/parallel/ServiceCmdConnect 13.63
98 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 2.01
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.13
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.35
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 1.14
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
121 TestFunctional/parallel/ImageCommands/Setup 0.83
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.72
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.07
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
128 TestFunctional/parallel/ProfileCmd/profile_list 0.53
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.65
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.83
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
147 TestFunctional/parallel/ServiceCmd/List 0.5
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
150 TestFunctional/parallel/ServiceCmd/Format 0.47
151 TestFunctional/parallel/ServiceCmd/URL 0.39
152 TestFunctional/parallel/MountCmd/any-port 25.82
153 TestFunctional/parallel/MountCmd/specific-port 2.07
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.14
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 176.65
163 TestMultiControlPlane/serial/DeployApp 8.44
164 TestMultiControlPlane/serial/PingHostFromPods 1.8
165 TestMultiControlPlane/serial/AddWorkerNode 28.86
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.98
168 TestMultiControlPlane/serial/CopyFile 19.41
169 TestMultiControlPlane/serial/StopSecondaryNode 12.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 32.76
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.4
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 140.03
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.74
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
176 TestMultiControlPlane/serial/StopCluster 35.72
177 TestMultiControlPlane/serial/RestartCluster 114.52
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
179 TestMultiControlPlane/serial/AddSecondaryNode 68.89
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.06
184 TestJSONOutput/start/Command 48.58
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.76
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.66
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.83
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.26
209 TestKicCustomNetwork/create_custom_network 39.6
210 TestKicCustomNetwork/use_default_bridge_network 34.28
211 TestKicExistingNetwork 32.34
212 TestKicCustomSubnet 36.98
213 TestKicStaticIP 34.96
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 67.82
218 TestMountStart/serial/StartWithMountFirst 6.95
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 7.03
221 TestMountStart/serial/VerifyMountSecond 0.28
222 TestMountStart/serial/DeleteFirst 1.64
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.2
225 TestMountStart/serial/RestartStopped 7.75
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 81.42
230 TestMultiNode/serial/DeployApp2Nodes 6.41
231 TestMultiNode/serial/PingHostFrom2Pods 1.05
232 TestMultiNode/serial/AddNode 28.02
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.71
235 TestMultiNode/serial/CopyFile 10.04
236 TestMultiNode/serial/StopNode 2.3
237 TestMultiNode/serial/StartAfterStop 7.78
238 TestMultiNode/serial/RestartKeepsNodes 78.73
239 TestMultiNode/serial/DeleteNode 5.51
240 TestMultiNode/serial/StopMultiNode 23.83
241 TestMultiNode/serial/RestartMultiNode 52.45
242 TestMultiNode/serial/ValidateNameConflict 34.37
247 TestPreload 134.06
252 TestInsufficientStorage 11.34
253 TestRunningBinaryUpgrade 131.98
255 TestKubernetesUpgrade 408.02
256 TestMissingContainerUpgrade 110.12
258 TestPause/serial/Start 63.86
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
261 TestNoKubernetes/serial/StartWithK8s 40.33
262 TestNoKubernetes/serial/StartWithStopK8s 18.84
263 TestNoKubernetes/serial/Start 8.92
264 TestPause/serial/SecondStartNoReconfiguration 24.55
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
266 TestNoKubernetes/serial/ProfileList 0.97
267 TestNoKubernetes/serial/Stop 1.23
268 TestNoKubernetes/serial/StartNoArgs 8.02
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
277 TestNetworkPlugins/group/false 4.07
278 TestPause/serial/Pause 0.94
282 TestPause/serial/VerifyStatus 0.45
283 TestPause/serial/Unpause 0.85
284 TestPause/serial/PauseAgain 1.11
285 TestPause/serial/DeletePaused 3.04
286 TestPause/serial/VerifyDeletedResources 0.17
287 TestStoppedBinaryUpgrade/Setup 0.79
288 TestStoppedBinaryUpgrade/Upgrade 122.05
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
297 TestNetworkPlugins/group/auto/Start 62.6
298 TestNetworkPlugins/group/auto/KubeletFlags 0.3
299 TestNetworkPlugins/group/auto/NetCatPod 11.34
300 TestNetworkPlugins/group/auto/DNS 0.21
301 TestNetworkPlugins/group/auto/Localhost 0.15
302 TestNetworkPlugins/group/auto/HairPin 0.18
303 TestNetworkPlugins/group/kindnet/Start 55.61
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
306 TestNetworkPlugins/group/kindnet/NetCatPod 11.36
307 TestNetworkPlugins/group/kindnet/DNS 0.17
308 TestNetworkPlugins/group/kindnet/Localhost 0.15
309 TestNetworkPlugins/group/kindnet/HairPin 0.15
310 TestNetworkPlugins/group/calico/Start 66.67
311 TestNetworkPlugins/group/custom-flannel/Start 64.71
312 TestNetworkPlugins/group/calico/ControllerPod 6.01
313 TestNetworkPlugins/group/calico/KubeletFlags 0.42
314 TestNetworkPlugins/group/calico/NetCatPod 12.39
315 TestNetworkPlugins/group/calico/DNS 0.31
316 TestNetworkPlugins/group/calico/Localhost 0.31
317 TestNetworkPlugins/group/calico/HairPin 0.32
318 TestNetworkPlugins/group/enable-default-cni/Start 75.75
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
321 TestNetworkPlugins/group/custom-flannel/DNS 0.23
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
324 TestNetworkPlugins/group/flannel/Start 45.37
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
327 TestNetworkPlugins/group/flannel/ControllerPod 6
328 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
332 TestNetworkPlugins/group/flannel/NetCatPod 11.26
333 TestNetworkPlugins/group/flannel/DNS 0.23
334 TestNetworkPlugins/group/flannel/Localhost 0.21
335 TestNetworkPlugins/group/flannel/HairPin 0.2
336 TestNetworkPlugins/group/bridge/Start 79.32
338 TestStartStop/group/old-k8s-version/serial/FirstStart 150.22
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
340 TestNetworkPlugins/group/bridge/NetCatPod 10.3
341 TestNetworkPlugins/group/bridge/DNS 0.2
342 TestNetworkPlugins/group/bridge/Localhost 0.16
343 TestNetworkPlugins/group/bridge/HairPin 0.15
345 TestStartStop/group/no-preload/serial/FirstStart 63.59
346 TestStartStop/group/old-k8s-version/serial/DeployApp 10.52
347 TestStartStop/group/no-preload/serial/DeployApp 10.31
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
349 TestStartStop/group/old-k8s-version/serial/Stop 12.03
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
351 TestStartStop/group/no-preload/serial/Stop 11.94
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
353 TestStartStop/group/old-k8s-version/serial/SecondStart 115.29
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
355 TestStartStop/group/no-preload/serial/SecondStart 53.05
356 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
357 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
358 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
359 TestStartStop/group/no-preload/serial/Pause 3.21
361 TestStartStop/group/embed-certs/serial/FirstStart 50.04
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
364 TestStartStop/group/embed-certs/serial/DeployApp 10.48
365 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
366 TestStartStop/group/old-k8s-version/serial/Pause 3.02
368 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.75
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.85
370 TestStartStop/group/embed-certs/serial/Stop 12.16
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
372 TestStartStop/group/embed-certs/serial/SecondStart 56.73
373 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.39
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
375 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.02
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.01
380 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
381 TestStartStop/group/embed-certs/serial/Pause 3.68
383 TestStartStop/group/newest-cni/serial/FirstStart 39.29
384 TestStartStop/group/newest-cni/serial/DeployApp 0
385 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
386 TestStartStop/group/newest-cni/serial/Stop 1.78
387 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
388 TestStartStop/group/newest-cni/serial/SecondStart 17.43
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.19
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.4
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.9
393 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
394 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
395 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
396 TestStartStop/group/newest-cni/serial/Pause 3.86
TestDownloadOnly/v1.20.0/json-events (6.23s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-174775 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-174775 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.228407429s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.23s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0214 21:14:28.191596  278186 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0214 21:14:28.191716  278186 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
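
The preload-exists check above passes because the tarball fetched during the earlier json-events run is already on disk at the logged cache path. A minimal Go sketch of that existence check, assuming the cache layout shown in the log; preloadPath is a hypothetical helper, not minikube's API:

	// preload_check.go - sketch of the local preload lookup; cache layout
	// taken from the log above, preloadPath is a hypothetical helper.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4",
			k8sVersion, runtime, arch)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		// MINIKUBE_HOME matches the env var shown in the Last Start log.
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.20.0", "cri-o", "arm64")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload:", p)
		} else {
			fmt.Println("no local preload, would download:", p)
		}
	}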

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-174775
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-174775: exit status 85 (98.910154ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-174775 | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC |          |
	|         | -p download-only-174775        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:14:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:14:22.014999  278191 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:14:22.015164  278191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:14:22.015176  278191 out.go:358] Setting ErrFile to fd 2...
	I0214 21:14:22.015181  278191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:14:22.015448  278191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	W0214 21:14:22.015603  278191 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20315-272800/.minikube/config/config.json: open /home/jenkins/minikube-integration/20315-272800/.minikube/config/config.json: no such file or directory
	I0214 21:14:22.016018  278191 out.go:352] Setting JSON to true
	I0214 21:14:22.016919  278191 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7009,"bootTime":1739560653,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 21:14:22.017003  278191 start.go:140] virtualization:  
	I0214 21:14:22.021244  278191 out.go:97] [download-only-174775] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0214 21:14:22.021433  278191 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball: no such file or directory
	I0214 21:14:22.021538  278191 notify.go:220] Checking for updates...
	I0214 21:14:22.024486  278191 out.go:169] MINIKUBE_LOCATION=20315
	I0214 21:14:22.027445  278191 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:14:22.030453  278191 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:14:22.033461  278191 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	I0214 21:14:22.036406  278191 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 21:14:22.042105  278191 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 21:14:22.042452  278191 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:14:22.068518  278191 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0214 21:14:22.068631  278191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:14:22.126138  278191 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-14 21:14:22.117357538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:14:22.126257  278191 docker.go:318] overlay module found
	I0214 21:14:22.129247  278191 out.go:97] Using the docker driver based on user configuration
	I0214 21:14:22.129278  278191 start.go:304] selected driver: docker
	I0214 21:14:22.129285  278191 start.go:908] validating driver "docker" against <nil>
	I0214 21:14:22.129380  278191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:14:22.188187  278191 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-14 21:14:22.179493421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:14:22.188428  278191 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:14:22.188714  278191 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 21:14:22.188876  278191 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 21:14:22.192014  278191 out.go:169] Using Docker driver with root privileges
	I0214 21:14:22.194760  278191 cni.go:84] Creating CNI manager for ""
	I0214 21:14:22.194825  278191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 21:14:22.194845  278191 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 21:14:22.194930  278191 start.go:347] cluster config:
	{Name:download-only-174775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-174775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:14:22.197913  278191 out.go:97] Starting "download-only-174775" primary control-plane node in "download-only-174775" cluster
	I0214 21:14:22.197935  278191 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 21:14:22.200841  278191 out.go:97] Pulling base image v0.0.46-1739182054-20387 ...
	I0214 21:14:22.200871  278191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:14:22.200976  278191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0214 21:14:22.217992  278191 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0214 21:14:22.218815  278191 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory
	I0214 21:14:22.218933  278191 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0214 21:14:22.270395  278191 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0214 21:14:22.270424  278191 cache.go:56] Caching tarball of preloaded images
	I0214 21:14:22.271251  278191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:14:22.274574  278191 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0214 21:14:22.274594  278191 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0214 21:14:22.359874  278191 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0214 21:14:26.467743  278191 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0214 21:14:26.467904  278191 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-174775 host does not exist
	  To start a cluster, run: "minikube start -p download-only-174775"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
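
The download URL in the log above carries a ?checksum=md5:... parameter, and preload.go saves and verifies that checksum after the fetch. A minimal Go sketch of such an md5 verification, assuming the expected digest is the one embedded in the URL; verifyPreload is a hypothetical helper, not minikube's implementation:

	// checksum_verify.go - sketch of the md5 check implied by the
	// "?checksum=md5:..." download URL above; verifyPreload is hypothetical.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func verifyPreload(path, wantMD5 string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		if len(os.Args) < 2 {
			fmt.Println("usage: checksum_verify <tarball>")
			return
		}
		// Digest value taken from the download URL in the log above.
		fmt.Println(verifyPreload(os.Args[1], "59cd2ef07b53f039bfd1761b921f2a02"))
	}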

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-174775
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.32.1/json-events (5.44s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-596516 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-596516 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.440613164s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.44s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0214 21:14:34.105543  278186 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0214 21:14:34.105585  278186 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-596516
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-596516: exit status 85 (95.08972ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-174775 | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC |                     |
	|         | -p download-only-174775        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC | 14 Feb 25 21:14 UTC |
	| delete  | -p download-only-174775        | download-only-174775 | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC | 14 Feb 25 21:14 UTC |
	| start   | -o=json --download-only        | download-only-596516 | jenkins | v1.35.0 | 14 Feb 25 21:14 UTC |                     |
	|         | -p download-only-596516        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:14:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:14:28.707893  278393 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:14:28.708028  278393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:14:28.708046  278393 out.go:358] Setting ErrFile to fd 2...
	I0214 21:14:28.708052  278393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:14:28.708402  278393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:14:28.708898  278393 out.go:352] Setting JSON to true
	I0214 21:14:28.709753  278393 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7016,"bootTime":1739560653,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 21:14:28.709850  278393 start.go:140] virtualization:  
	I0214 21:14:28.713543  278393 out.go:97] [download-only-596516] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0214 21:14:28.713788  278393 notify.go:220] Checking for updates...
	I0214 21:14:28.717452  278393 out.go:169] MINIKUBE_LOCATION=20315
	I0214 21:14:28.720442  278393 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:14:28.723272  278393 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:14:28.726196  278393 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	I0214 21:14:28.729230  278393 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 21:14:28.734981  278393 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 21:14:28.735313  278393 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:14:28.772976  278393 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0214 21:14:28.773093  278393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:14:28.830751  278393 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-14 21:14:28.821146517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:14:28.830865  278393 docker.go:318] overlay module found
	I0214 21:14:28.833883  278393 out.go:97] Using the docker driver based on user configuration
	I0214 21:14:28.833919  278393 start.go:304] selected driver: docker
	I0214 21:14:28.833928  278393 start.go:908] validating driver "docker" against <nil>
	I0214 21:14:28.834055  278393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:14:28.884783  278393 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-14 21:14:28.876063682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:14:28.885003  278393 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:14:28.885290  278393 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 21:14:28.885449  278393 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 21:14:28.888585  278393 out.go:169] Using Docker driver with root privileges
	I0214 21:14:28.891448  278393 cni.go:84] Creating CNI manager for ""
	I0214 21:14:28.891522  278393 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0214 21:14:28.891538  278393 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0214 21:14:28.891646  278393 start.go:347] cluster config:
	{Name:download-only-596516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-596516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:14:28.894716  278393 out.go:97] Starting "download-only-596516" primary control-plane node in "download-only-596516" cluster
	I0214 21:14:28.894753  278393 cache.go:121] Beginning downloading kic base image for docker with crio
	I0214 21:14:28.897702  278393 out.go:97] Pulling base image v0.0.46-1739182054-20387 ...
	I0214 21:14:28.897751  278393 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:14:28.897842  278393 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0214 21:14:28.913531  278393 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0214 21:14:28.913673  278393 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory
	I0214 21:14:28.913693  278393 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory, skipping pull
	I0214 21:14:28.913698  278393 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad exists in cache, skipping pull
	I0214 21:14:28.913706  278393 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad as a tarball
	I0214 21:14:28.970997  278393 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0214 21:14:28.971040  278393 cache.go:56] Caching tarball of preloaded images
	I0214 21:14:28.971244  278393 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:14:28.974430  278393 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0214 21:14:28.974466  278393 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 ...
	I0214 21:14:29.055540  278393 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:2975fc7b8b3f798b17cd470734f6f7e1 -> /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0214 21:14:32.557727  278393 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 ...
	I0214 21:14:32.557831  278393 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20315-272800/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 ...
	I0214 21:14:33.435386  278393 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0214 21:14:33.435784  278393 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/download-only-596516/config.json ...
	I0214 21:14:33.435819  278393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/download-only-596516/config.json: {Name:mk3ad52b6324e5d68ce4a5df66c6f2a7552d9961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:14:33.436025  278393 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:14:33.436181  278393 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20315-272800/.minikube/cache/linux/arm64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-596516 host does not exist
	  To start a cluster, run: "minikube start -p download-only-596516"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.10s)
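
The lock.go line above shows the profile config being written under a lock with a 500ms retry delay and a 1m0s timeout. A minimal Go sketch of a lock-guarded write with those parameters; the lockfile scheme here is a simple stand-in, not minikube's actual lock implementation:

	// locked_write.go - stand-in for the lock-guarded config write seen in
	// the "WriteFile acquiring ... Delay:500ms Timeout:1m0s" line above.
	package main

	import (
		"errors"
		"os"
		"time"
	)

	func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
		lock := path + ".lock"
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation of the lockfile atomic: only one writer wins.
			f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				defer os.Remove(lock)
				return os.WriteFile(path, data, 0o644)
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for " + lock)
			}
			time.Sleep(delay) // matches the 500ms retry delay logged above
		}
	}

	func main() {
		_ = writeFileLocked("config.json", []byte("{}"), 500*time.Millisecond, time.Minute)
	}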

TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-596516
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0214 21:14:35.439133  278186 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-741244 --alsologtostderr --binary-mirror http://127.0.0.1:38683 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-741244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-741244
--- PASS: TestBinaryMirror (0.60s)
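
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:38683 above) that serves the same paths as dl.k8s.io. A minimal Go sketch of such a mirror; the directory name is an assumption, and this is not the test's actual fixture:

	// mirror.go - sketch of a local binary mirror; port taken from the log,
	// directory layout is an assumption.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-fetched binaries from ./mirror, mimicking the dl.k8s.io
		// path layout (e.g. /release/v1.32.1/bin/linux/arm64/kubectl).
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:38683", nil))
	}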

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-794492
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-794492: exit status 85 (75.42036ms)

-- stdout --
	* Profile "addons-794492" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-794492"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-794492
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-794492: exit status 85 (94.046451ms)

-- stdout --
	* Profile "addons-794492" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-794492"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
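
Both PreSetup subtests assert that enabling or disabling an addon on a nonexistent profile exits with status 85 rather than succeeding. A minimal Go sketch of how a caller can observe that exit code; the binary path is taken from the log, while the profile name here is an assumption:

	// exitcode.go - sketch of detecting the exit status 85 asserted above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"addons", "enable", "dashboard", "-p", "no-such-profile")
		if err := cmd.Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				// The PreSetup tests above expect 85 for a missing profile.
				fmt.Println("exit code:", ee.ExitCode())
			}
		}
	}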

TestAddons/Setup (188.94s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-794492 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-794492 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m8.939762458s)
--- PASS: TestAddons/Setup (188.94s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-794492 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-794492 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (10.99s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-794492 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-794492 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [464e16fb-f9c8-4a0b-b20c-b1aa1188d268] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [464e16fb-f9c8-4a0b-b20c-b1aa1188d268] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004367731s
addons_test.go:633: (dbg) Run:  kubectl --context addons-794492 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-794492 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-794492 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-794492 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.99s)

TestAddons/parallel/Registry (18.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 17.472618ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-4ntpr" [c896f503-30c4-4427-b501-d736eb1a7d4f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00328193s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lvdl4" [2e950ccc-848b-4461-bebb-8258a3ed7a24] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.002880722s
addons_test.go:331: (dbg) Run:  kubectl --context addons-794492 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-794492 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-794492 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.802931652s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 ip
2025/02/14 21:18:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.79s)
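
The registry test exercises two paths: an in-cluster wget against registry.kube-system.svc.cluster.local and a host-side GET against the node IP on port 5000 (the DEBUG line above). A minimal Go sketch of the host-side probe, reusing the node IP from the log:

	// registry_probe.go - sketch of the host-side reachability check; the
	// node IP and port come from the DEBUG line above.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// registry-proxy publishes the registry on NodeIP:5000.
		resp, err := client.Get("http://192.168.49.2:5000")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry answered:", resp.Status)
	}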

TestAddons/parallel/InspektorGadget (12.09s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-g6fj7" [df2c97fe-a17b-47ad-8127-a38f6feeb000] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004939006s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 addons disable inspektor-gadget --alsologtostderr -v=1: (6.085170149s)
--- PASS: TestAddons/parallel/InspektorGadget (12.09s)

TestAddons/parallel/MetricsServer (6.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 8.161302ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-mww6g" [6b44cf41-f1cb-4c1b-b2b3-ef3f5d40d6f0] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006687799s
addons_test.go:402: (dbg) Run:  kubectl --context addons-794492 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.84s)
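
The test waits for the metrics-server pod and then runs kubectl top pods, which only succeeds once the aggregated metrics.k8s.io API answers. A minimal Go sketch that probes that API raw; querying it via kubectl get --raw is an assumed approach, not what addons_test.go does internally:

	// top_pods.go - sketch of probing the metrics API behind "kubectl top".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-794492",
			"get", "--raw", "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods").CombinedOutput()
		if err != nil {
			fmt.Println("metrics API not ready:", err)
			return
		}
		fmt.Printf("metrics API returned %d bytes\n", len(out))
	}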

TestAddons/parallel/CSI (51.64s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0214 21:18:46.131992  278186 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0214 21:18:46.137189  278186 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0214 21:18:46.137224  278186 kapi.go:107] duration metric: took 8.251063ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.263379ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-794492 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-794492 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a8888be2-0be4-410f-9c88-8e0aca848bfb] Pending
helpers_test.go:344: "task-pv-pod" [a8888be2-0be4-410f-9c88-8e0aca848bfb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a8888be2-0be4-410f-9c88-8e0aca848bfb] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00306979s
addons_test.go:511: (dbg) Run:  kubectl --context addons-794492 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-794492 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-794492 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-794492 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-794492 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-794492 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-794492 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5cd7b073-c2c1-4c6b-a915-c8a8adc5be12] Pending
helpers_test.go:344: "task-pv-pod-restore" [5cd7b073-c2c1-4c6b-a915-c8a8adc5be12] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5cd7b073-c2c1-4c6b-a915-c8a8adc5be12] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003172086s
addons_test.go:553: (dbg) Run:  kubectl --context addons-794492 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-794492 delete pod task-pv-pod-restore: (1.134190182s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-794492 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-794492 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 addons disable volumesnapshots --alsologtostderr -v=1: (1.205249734s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.924842768s)
--- PASS: TestAddons/parallel/CSI (51.64s)
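The repeated helpers_test.go:394 lines above are the test helper polling the PVC phase until it reports Bound. A minimal Go sketch of that kind of loop, shelling out to kubectl (assumed on PATH) the same way the log does; the function name and the 2s retry interval are illustrative, not minikube's actual helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pollPVCPhase runs `kubectl get pvc -o jsonpath={.status.phase}` until
// the claim reports Bound or the deadline passes, mirroring the
// helpers_test.go:394 lines in the log above.
func pollPVCPhase(ctx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil // claim is bound; the test can proceed
		}
		time.Sleep(2 * time.Second) // back off between polls
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	fmt.Println(pollPVCPhase("addons-794492", "default", "hpvc", 6*time.Minute))
}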
TestAddons/parallel/Headlamp (16.76s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-794492 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-gfs6t" [f737ada9-cf46-42f9-8118-f53b82146566] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-gfs6t" [f737ada9-cf46-42f9-8118-f53b82146566] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-gfs6t" [f737ada9-cf46-42f9-8118-f53b82146566] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003339764s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 addons disable headlamp --alsologtostderr -v=1: (5.783496218s)
--- PASS: TestAddons/parallel/Headlamp (16.76s)
TestAddons/parallel/CloudSpanner (5.59s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-9fm7m" [31f84fa9-7e5e-416c-be2a-a214cf212211] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003179741s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)
TestAddons/parallel/LocalPath (53.61s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-794492 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-794492 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-794492 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8dda8914-e0d5-4459-9267-31662fde0d17] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8dda8914-e0d5-4459-9267-31662fde0d17] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8dda8914-e0d5-4459-9267-31662fde0d17] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003713129s
addons_test.go:906: (dbg) Run:  kubectl --context addons-794492 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 ssh "cat /opt/local-path-provisioner/pvc-ef979e62-d684-495c-80b9-afcdaf8e6967_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-794492 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-794492 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.436109478s)
--- PASS: TestAddons/parallel/LocalPath (53.61s)
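The LocalPath test above writes file1 through the PVC and then reads it back from the node's provisioner directory over `minikube ssh` (addons_test.go:915). A sketch of that verification step, assuming the same binary path and profile as the log; the pvc-<uid> directory segment differs on every run, and the expected content here is a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// readBackFromNode cats a file on the minikube node over `minikube ssh`
// and compares it with the expected content.
func readBackFromNode(profile, path, want string) error {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "cat "+path).Output()
	if err != nil {
		return err
	}
	if got := strings.TrimSpace(string(out)); got != want {
		return fmt.Errorf("file content = %q, want %q", got, want)
	}
	return nil
}

func main() {
	// The pvc-<uid> segment changes per run; "expected" is a placeholder.
	path := "/opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"
	fmt.Println(readBackFromNode("addons-794492", path, "expected"))
}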
TestAddons/parallel/NvidiaDevicePlugin (6.57s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8f2xh" [508118fd-f3b6-4f76-819c-6fe2f0fd0e81] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003538332s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)
TestAddons/parallel/Yakd (11.8s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-lbxks" [3769f067-3920-40f3-b048-143ba9a77236] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003416897s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-794492 addons disable yakd --alsologtostderr -v=1: (5.792986975s)
--- PASS: TestAddons/parallel/Yakd (11.80s)
TestAddons/StoppedEnableDisable (12.15s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-794492
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-794492: (11.864852054s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-794492
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-794492
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-794492
--- PASS: TestAddons/StoppedEnableDisable (12.15s)
TestCertOptions (41.13s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-111432 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-111432 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.404071401s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-111432 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-111432 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-111432 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-111432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-111432
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-111432: (2.00741677s)
--- PASS: TestCertOptions (41.13s)
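The openssl step above (cert_options_test.go:60) checks that the extra --apiserver-names and --apiserver-ips made it into the apiserver certificate's SANs. The same check can be expressed with Go's standard library; this is a sketch, with the cert bytes assumed to come from the `minikube ssh` cat shown in the log and the function name illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// checkSANs parses a PEM certificate and verifies one DNS name and one
// IP from the test's --apiserver-names/--apiserver-ips flags.
func checkSANs(certPEM []byte, wantDNS string, wantIP net.IP) error {
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return fmt.Errorf("no PEM block in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if err := cert.VerifyHostname(wantDNS); err != nil {
		return fmt.Errorf("DNS SAN %q missing: %v", wantDNS, err)
	}
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			return nil
		}
	}
	return fmt.Errorf("IP SAN %v missing", wantIP)
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: checksans <cert.pem>")
		return
	}
	pemBytes, err := os.ReadFile(os.Args[1]) // e.g. a copy of apiserver.crt
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(checkSANs(pemBytes, "www.google.com", net.ParseIP("192.168.15.15")))
}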
TestCertExpiration (248.46s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-298060 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-298060 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.263484483s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-298060 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-298060 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.166171734s)
helpers_test.go:175: Cleaning up "cert-expiration-298060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-298060
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-298060: (3.028095676s)
--- PASS: TestCertExpiration (248.46s)
TestForceSystemdFlag (48.9s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-080082 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-080082 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.309129201s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-080082 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-080082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-080082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-080082: (5.157260202s)
--- PASS: TestForceSystemdFlag (48.90s)
TestForceSystemdEnv (43.22s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-874807 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-874807 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.460503265s)
helpers_test.go:175: Cleaning up "force-systemd-env-874807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-874807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-874807: (2.75766475s)
--- PASS: TestForceSystemdEnv (43.22s)
TestErrorSpam/setup (31.19s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-204307 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-204307 --driver=docker  --container-runtime=crio
E0214 21:22:45.971103  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:45.977484  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:45.988867  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:46.011773  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:46.053210  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:46.134629  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:46.296070  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:46.617709  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:47.259734  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:48.541128  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:51.102552  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:22:56.224609  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-204307 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-204307 --driver=docker  --container-runtime=crio: (31.184939334s)
--- PASS: TestErrorSpam/setup (31.19s)
TestErrorSpam/start (0.78s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)
TestErrorSpam/status (1.19s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 status
--- PASS: TestErrorSpam/status (1.19s)
TestErrorSpam/pause (1.79s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 pause
--- PASS: TestErrorSpam/pause (1.79s)
TestErrorSpam/unpause (1.82s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 unpause
--- PASS: TestErrorSpam/unpause (1.82s)
TestErrorSpam/stop (1.46s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 stop
E0214 21:23:06.465926  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 stop: (1.249739285s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-204307 --log_dir /tmp/nospam-204307 stop
--- PASS: TestErrorSpam/stop (1.46s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20315-272800/.minikube/files/etc/test/nested/copy/278186/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (51.28s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-264648 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0214 21:23:26.948201  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-264648 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (51.276148272s)
--- PASS: TestFunctional/serial/StartWithProxy (51.28s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (27.37s)
=== RUN   TestFunctional/serial/SoftStart
I0214 21:24:04.197907  278186 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-264648 --alsologtostderr -v=8
E0214 21:24:07.909667  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-264648 --alsologtostderr -v=8: (27.362131865s)
functional_test.go:680: soft start took 27.370467627s for "functional-264648" cluster.
I0214 21:24:31.560407  278186 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (27.37s)
TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-264648 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
TestFunctional/serial/CacheCmd/cache/add_remote (4.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 cache add registry.k8s.io/pause:3.1: (1.535977011s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 cache add registry.k8s.io/pause:3.3: (1.483230847s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 cache add registry.k8s.io/pause:latest: (1.428344387s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.45s)
TestFunctional/serial/CacheCmd/cache/add_local (1.4s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-264648 /tmp/TestFunctionalserialCacheCmdcacheadd_local1390037960/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cache add minikube-local-cache-test:functional-264648
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cache delete minikube-local-cache-test:functional-264648
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-264648
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
TestFunctional/serial/CacheCmd/cache/cache_reload (2.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.483424ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 cache reload: (1.289057184s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.24s)
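The cache_reload sequence above is: remove the image from the node with crictl, confirm `crictl inspecti` now fails (the exit status 1 block), run `minikube cache reload`, and confirm the image is back. A sketch of the same four steps driven from Go, with the binary path and names taken from the log and error handling kept minimal:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command, echoes it with its combined output, and
// returns the error so callers can assert on success or failure.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	mk := "out/minikube-linux-arm64"
	img := "registry.k8s.io/pause:latest"
	_ = run(mk, "-p", "functional-264648", "ssh", "sudo crictl rmi "+img)
	// The inspect is expected to fail here: the image is gone from the node.
	if run(mk, "-p", "functional-264648", "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("image unexpectedly still present")
	}
	_ = run(mk, "-p", "functional-264648", "cache", "reload")
	// After the reload the inspect should succeed again.
	if err := run(mk, "-p", "functional-264648", "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}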
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 kubectl -- --context functional-264648 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-264648 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
TestFunctional/serial/ExtraConfig (31.56s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-264648 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-264648 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.562363856s)
functional_test.go:778: restart took 31.56247363s for "functional-264648" cluster.
I0214 21:25:12.215858  278186 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (31.56s)
TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-264648 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
TestFunctional/serial/LogsCmd (1.73s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 logs: (1.728546899s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)
TestFunctional/serial/LogsFileCmd (1.74s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 logs --file /tmp/TestFunctionalserialLogsFileCmd2876114807/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 logs --file /tmp/TestFunctionalserialLogsFileCmd2876114807/001/logs.txt: (1.741659771s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.74s)
TestFunctional/serial/InvalidService (4.68s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-264648 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-264648
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-264648: exit status 115 (611.833333ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31804 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-264648 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.68s)
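The SVC_UNREACHABLE error above means the Service object exists but no running pod backs it, so its NodePort URL is useless. One way to express that condition, as a sketch that shells out to kubectl (assumed on PATH) rather than using client-go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasReadyEndpoints reports whether the Service has at least one ready
// endpoint address, which is exactly what SVC_UNREACHABLE is about.
func hasReadyEndpoints(ctx, ns, svc string) bool {
	out, err := exec.Command("kubectl", "--context", ctx,
		"get", "endpoints", svc, "-n", ns,
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	if !hasReadyEndpoints("functional-264648", "default", "invalid-svc") {
		fmt.Println("no running pod for service invalid-svc found")
	}
}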
TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 config get cpus: exit status 14 (101.660471ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 config get cpus: exit status 14 (64.758039ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
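Both non-zero exits above are exit status 14, which `minikube config get` returns for a key that is not set. A sketch of how a caller can tell that apart from other failures using exec.ExitError; the helper name is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configGet returns the value of a minikube config key, or the exit
// code when the command fails; the log above shows exit status 14 for
// a key that has been unset.
func configGet(profile, key string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"config", "get", key).Output()
	if err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return "", ee.ExitCode() // 14: key not present in config
		}
		return "", -1 // could not run the binary at all
	}
	return string(out), 0
}

func main() {
	val, code := configGet("functional-264648", "cpus")
	fmt.Printf("value=%q exit=%d\n", val, code)
}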
TestFunctional/parallel/DashboardCmd (41.46s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-264648 --alsologtostderr -v=1]
2025/02/14 21:27:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-264648 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 307227: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (41.46s)
TestFunctional/parallel/DryRun (0.51s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-264648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-264648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (223.756092ms)
-- stdout --
	* [functional-264648] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0214 21:26:01.043266  305375 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:26:01.043456  305375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:26:01.043468  305375 out.go:358] Setting ErrFile to fd 2...
	I0214 21:26:01.043475  305375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:26:01.043704  305375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:26:01.044144  305375 out.go:352] Setting JSON to false
	I0214 21:26:01.045053  305375 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7708,"bootTime":1739560653,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 21:26:01.045128  305375 start.go:140] virtualization:  
	I0214 21:26:01.049798  305375 out.go:177] * [functional-264648] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0214 21:26:01.055046  305375 notify.go:220] Checking for updates...
	I0214 21:26:01.058830  305375 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:26:01.062971  305375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:26:01.066441  305375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:26:01.069861  305375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	I0214 21:26:01.073108  305375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 21:26:01.076415  305375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:26:01.080981  305375 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:26:01.082091  305375 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:26:01.124542  305375 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0214 21:26:01.124703  305375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:26:01.188859  305375 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-14 21:26:01.178092859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:26:01.188981  305375 docker.go:318] overlay module found
	I0214 21:26:01.192301  305375 out.go:177] * Using the docker driver based on existing profile
	I0214 21:26:01.195326  305375 start.go:304] selected driver: docker
	I0214 21:26:01.195356  305375 start.go:908] validating driver "docker" against &{Name:functional-264648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-264648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:26:01.195485  305375 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:26:01.199329  305375 out.go:201] 
	W0214 21:26:01.202595  305375 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0214 21:26:01.205733  305375 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-264648 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
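The dry run fails with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the 1800MB usable minimum quoted in the error. A sketch of that pre-flight check, with the threshold taken from the message; minikube's real validation is more involved than this:

package main

import "fmt"

// minUsableMB comes straight from the error message in the log above.
const minUsableMB = 1800

// validateMemory rejects requests below the usable minimum before any
// node would be created, which is what the dry run exercises.
func validateMemory(requestMB int) error {
	if requestMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, matching the dry run above
	fmt.Println(validateMemory(4000)) // passes
}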
TestFunctional/parallel/InternationalLanguage (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-264648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-264648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (216.898308ms)
-- stdout --
	* [functional-264648] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0214 21:26:32.635623  307058 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:26:32.635776  307058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:26:32.635800  307058 out.go:358] Setting ErrFile to fd 2...
	I0214 21:26:32.635822  307058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:26:32.637455  307058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:26:32.637927  307058 out.go:352] Setting JSON to false
	I0214 21:26:32.638848  307058 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7740,"bootTime":1739560653,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 21:26:32.639276  307058 start.go:140] virtualization:  
	I0214 21:26:32.642965  307058 out.go:177] * [functional-264648] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0214 21:26:32.647090  307058 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:26:32.647258  307058 notify.go:220] Checking for updates...
	I0214 21:26:32.653541  307058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:26:32.656482  307058 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:26:32.659452  307058 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	I0214 21:26:32.662443  307058 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 21:26:32.665502  307058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:26:32.668982  307058 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:26:32.669575  307058 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:26:32.705868  307058 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0214 21:26:32.706011  307058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:26:32.773530  307058 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-14 21:26:32.763690898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:26:32.773645  307058 docker.go:318] overlay module found
	I0214 21:26:32.776745  307058 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0214 21:26:32.779634  307058 start.go:304] selected driver: docker
	I0214 21:26:32.779662  307058 start.go:908] validating driver "docker" against &{Name:functional-264648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-264648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:26:32.779787  307058 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:26:32.783445  307058 out.go:201] 
	W0214 21:26:32.786385  307058 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0214 21:26:32.789210  307058 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
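
The pass above hinges on two behaviors: minikube selects its message catalog from the locale environment, and it validates --memory before doing any real work. The French error reads, in English, "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". Below is a minimal Go sketch, not part of the suite, of how such a localized dry-run failure can be reproduced; the binary path, profile name, and locale value are assumptions taken from the log above.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same invocation as the test, with the locale forced to French so
	// minikube's translated message catalog is selected.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-264648",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	// The log above shows exit status 23 for the RSRC_INSUFFICIENT_REQ_MEMORY reason.
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}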

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
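
The -f argument above is a Go text/template rendered over minikube's status struct; the literal "kublet" in the tested format string is just a label the test chose, while the field it references is .Kubelet. A sketch of how such a format string renders (the Status type here is a stand-in for illustration, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in holding the fields the template references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Values a healthy single-node cluster would report.
	if err := tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"}); err != nil {
		panic(err)
	}
}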

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (13.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-264648 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-264648 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-wwlkb" [5e279cfa-eeb5-41f9-a80f-8258659e4720] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-wwlkb" [5e279cfa-eeb5-41f9-a80f-8258659e4720] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.004680132s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:30469
functional_test.go:1692: http://192.168.49.2:30469: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-8449669db6-wwlkb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30469
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.63s)
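
The flow above is: create a deployment, expose it as a NodePort service, ask minikube for the node URL, then fetch it once the pod reports Running. A sketch of the final fetch with a retry loop (the URL is the one printed by the run above; the retry budget is an arbitrary choice):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:30469" // endpoint printed by `service ... --url` above
	for i := 0; i < 30; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s\n", body)
			return
		}
		time.Sleep(2 * time.Second) // the pod may still be pulling or starting
	}
	fmt.Println("service never became reachable")
}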

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh -n functional-264648 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cp functional-264648:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd75107904/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh -n functional-264648 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh -n functional-264648 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)
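
Each cp above is verified by reading the file back over ssh. A sketch of that round trip in Go, assuming the binary path and profile from the log, with error handling kept minimal:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// run shells out to minikube and returns stdout, exiting on failure.
func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "minikube", args, "failed:", err)
		os.Exit(1)
	}
	return out
}

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run("-p", "functional-264648", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	remote := run("-p", "functional-264648", "ssh", "-n", "functional-264648",
		"sudo cat /home/docker/cp-test.txt")
	if bytes.Equal(local, remote) {
		fmt.Println("round trip OK")
	}
}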

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/278186/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo cat /etc/test/nested/copy/278186/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)
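
The synced file originates on the host: anything placed under $MINIKUBE_HOME/files/<path> is copied to /<path> inside the node when the cluster starts, which is how /etc/test/nested/copy/278186/hosts (278186 being the test process PID) got there. A sketch of staging such a file, with paths assumed from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home := os.Getenv("MINIKUBE_HOME") // the .minikube dir, as set in the log above
	dst := filepath.Join(home, "files", "etc", "test", "nested", "copy",
		fmt.Sprint(os.Getpid()), "hosts")
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		panic(err)
	}
	// After the next `minikube start`, this should appear inside the node at
	// /etc/test/nested/copy/<pid>/hosts, where the test reads it back via ssh.
	if err := os.WriteFile(dst, []byte("Test file for checking file sync process\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("staged", dst)
}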

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/278186.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo cat /etc/ssl/certs/278186.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/278186.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo cat /usr/share/ca-certificates/278186.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/2781862.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo cat /etc/ssl/certs/2781862.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/2781862.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo cat /usr/share/ca-certificates/2781862.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)
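
The numeric names checked above (51391683.0, 3ec20f2e.0) appear to be OpenSSL subject-hash links: the synced cert is installed both under its own name and under its `openssl x509 -hash` value so the hashed-directory lookup in /etc/ssl/certs can find it. A sketch of deriving that name, assuming openssl on PATH; the input path mirrors the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := os.Args[1] // e.g. the 278186.pem cert exercised above
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expect /etc/ssl/certs/%s.0\n", hash)
}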

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-264648 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 ssh "sudo systemctl is-active docker": exit status 1 (350.52352ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 ssh "sudo systemctl is-active containerd": exit status 1 (360.17647ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
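
The two non-zero exits above are the passing case: `systemctl is-active` prints the unit state and exits 0 only when the unit is active, so with crio as the configured runtime, docker and containerd reporting "inactive" with exit status 3 is exactly what the test expects. A small sketch of the same probe (unit names from the log; the "unknown" fallback is an illustrative assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		// Output() still returns stdout ("inactive\n") when the exit code is non-zero.
		out, err := exec.Command("systemctl", "is-active", unit).Output()
		state := strings.TrimSpace(string(out))
		if state == "" && err != nil {
			state = "unknown"
		}
		fmt.Printf("%-10s %s\n", unit, state)
	}
}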

                                                
                                    
x
+
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 version -o=json --components: (1.142944237s)
--- PASS: TestFunctional/parallel/Version/components (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-264648 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-264648
localhost/kicbase/echo-server:functional-264648
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-264648 image ls --format short --alsologtostderr:
I0214 21:27:15.962415  307615 out.go:345] Setting OutFile to fd 1 ...
I0214 21:27:15.962595  307615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:15.962626  307615 out.go:358] Setting ErrFile to fd 2...
I0214 21:27:15.962650  307615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:15.962938  307615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
I0214 21:27:15.963711  307615 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:15.963883  307615 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:15.964456  307615 cli_runner.go:164] Run: docker container inspect functional-264648 --format={{.State.Status}}
I0214 21:27:15.982173  307615 ssh_runner.go:195] Run: systemctl --version
I0214 21:27:15.982223  307615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-264648
I0214 21:27:15.999636  307615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/functional-264648/id_rsa Username:docker}
I0214 21:27:16.091681  307615 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
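
All four `image ls` formats in this block (short, table, json, yaml) are renderings of the same data source visible in each test's stderr: `sudo crictl images --output json` run on the node. A sketch of reading that source directly; the struct mirrors the subset of crictl's CRI-style JSON the listings use, and it assumes crictl is available on the host:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields of crictl's JSON output used by the listings above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // the "short" format above is essentially this tag list
		}
	}
}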

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-264648 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.32.1            | e124fbed851d7 | 98.3MB |
| localhost/minikube-local-cache-test     | functional-264648  | 082342daaaad7 | 3.33kB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 265c2dedf28ab | 95MB   |
| docker.io/library/nginx                 | latest             | 9b1b7be1ffa60 | 201MB  |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 2933761aa7ada | 88.2MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | ddb38cac617cb | 69MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| localhost/my-image                      | functional-264648  | 1f01056fbffb4 | 1.64MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | e1181ee320546 | 99MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/kicbase/echo-server           | functional-264648  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/library/nginx                 | alpine             | 525fa81b865c3 | 50.8MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-264648 image ls --format table --alsologtostderr:
I0214 21:27:20.242300  307960 out.go:345] Setting OutFile to fd 1 ...
I0214 21:27:20.242447  307960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:20.242481  307960 out.go:358] Setting ErrFile to fd 2...
I0214 21:27:20.242495  307960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:20.242782  307960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
I0214 21:27:20.243582  307960 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:20.243760  307960 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:20.244308  307960 cli_runner.go:164] Run: docker container inspect functional-264648 --format={{.State.Status}}
I0214 21:27:20.262985  307960 ssh_runner.go:195] Run: systemctl --version
I0214 21:27:20.263191  307960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-264648
I0214 21:27:20.280693  307960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/functional-264648/id_rsa Username:docker}
I0214 21:27:20.372051  307960 ssh_runner.go:195] Run: sudo crictl images --output json
E0214 21:27:45.968074  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:28:13.673501  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-264648 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36d
d5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2f6c962e7b8311337352d9fdea917da2184d991
9f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8
s.io/pause:latest"],"size":"246070"},{"id":"64110886608891cdd4c06b8e630e56d935e200dcd236dbb660fc5bd88367c829","repoDigests":["docker.io/library/a88a090d2383b7cd780a124ff1602999fc4bb54ebc2b61bb82b22d365aecf311-tmp@sha256:215fa2c935420b5fc95e6c87ad2a22219595e041fa82af34632502a2a0cb1ebd"],"repoTags":[],"size":"1637644"},{"id":"525fa81b865c3bef8743265945df1859f8f0cb06a4f71aacbcb54f2fbd5a57d8","repoDigests":["docker.io/library/nginx@sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef","docker.io/library/nginx@sha256:df7b6963d5252424319dc265539db380d75bd4e40e112a83f8eaf09d1a9cb909"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50780648"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/stora
ge-provisioner:v5"],"size":"29037500"},{"id":"082342daaaad754390e0c6bde8475ecf25793e114a7751a025ca9d525f2210ae","repoDigests":["localhost/minikube-local-cache-test@sha256:10e91b4453211846e5263593c3dfb53d07de85a8673b22e53095103aa92bcbd0"],"repoTags":["localhost/minikube-local-cache-test:functional-264648"],"size":"3330"},{"id":"1f01056fbffb45cb8baaa0cce72d2d55ef97f5bae89313c9a0a7ebe7c48474ef","repoDigests":["localhost/my-image@sha256:ba63c9ba68f9646f9d655b7feec70e829c032ec0310c4113185897483d655181"],"repoTags":["localhost/my-image:functional-264648"],"size":"1640226"},{"id":"265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"94991840"},{"id":"e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repo
Digests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"98313623"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6","repoDigests":["docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be","docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"99018802"},
{"id":"9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58","repoDigests":["docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34","docker.io/library/nginx@sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd"],"repoTags":["docker.io/library/nginx:latest"],"size":"201397159"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-264648"],"size":"4788229"},{"id":"2933761aa7adae93679cdde1c
0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"88241478"},{"id":"ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"68973892"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-264648 image ls --format json --alsologtostderr:
I0214 21:27:19.990017  307929 out.go:345] Setting OutFile to fd 1 ...
I0214 21:27:19.990206  307929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:19.990235  307929 out.go:358] Setting ErrFile to fd 2...
I0214 21:27:19.990257  307929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:19.990507  307929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
I0214 21:27:19.991247  307929 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:19.991422  307929 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:19.991946  307929 cli_runner.go:164] Run: docker container inspect functional-264648 --format={{.State.Status}}
I0214 21:27:20.018095  307929 ssh_runner.go:195] Run: systemctl --version
I0214 21:27:20.018150  307929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-264648
I0214 21:27:20.039209  307929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/functional-264648/id_rsa Username:docker}
I0214 21:27:20.131691  307929 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-264648 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 525fa81b865c3bef8743265945df1859f8f0cb06a4f71aacbcb54f2fbd5a57d8
repoDigests:
- docker.io/library/nginx@sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef
- docker.io/library/nginx@sha256:df7b6963d5252424319dc265539db380d75bd4e40e112a83f8eaf09d1a9cb909
repoTags:
- docker.io/library/nginx:alpine
size: "50780648"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "98313623"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6
repoDigests:
- docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "99018802"
- id: 2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "88241478"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58
repoDigests:
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
- docker.io/library/nginx@sha256:a90c36c019f67444c32f4d361084d4b5b853dbf8701055df7908bf42cc613cdd
repoTags:
- docker.io/library/nginx:latest
size: "201397159"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-264648
size: "4788229"
- id: 082342daaaad754390e0c6bde8475ecf25793e114a7751a025ca9d525f2210ae
repoDigests:
- localhost/minikube-local-cache-test@sha256:10e91b4453211846e5263593c3dfb53d07de85a8673b22e53095103aa92bcbd0
repoTags:
- localhost/minikube-local-cache-test:functional-264648
size: "3330"
- id: 265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "94991840"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "68973892"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-264648 image ls --format yaml --alsologtostderr:
I0214 21:27:16.197934  307647 out.go:345] Setting OutFile to fd 1 ...
I0214 21:27:16.198099  307647 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:16.198127  307647 out.go:358] Setting ErrFile to fd 2...
I0214 21:27:16.198144  307647 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:16.198511  307647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
I0214 21:27:16.199865  307647 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:16.200063  307647 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:16.200597  307647 cli_runner.go:164] Run: docker container inspect functional-264648 --format={{.State.Status}}
I0214 21:27:16.217581  307647 ssh_runner.go:195] Run: systemctl --version
I0214 21:27:16.217635  307647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-264648
I0214 21:27:16.236782  307647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/functional-264648/id_rsa Username:docker}
I0214 21:27:16.323500  307647 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 ssh pgrep buildkitd: exit status 1 (279.524899ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image build -t localhost/my-image:functional-264648 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 image build -t localhost/my-image:functional-264648 testdata/build --alsologtostderr: (3.037008178s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-arm64 -p functional-264648 image build -t localhost/my-image:functional-264648 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 64110886608
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-264648
--> 1f01056fbff
Successfully tagged localhost/my-image:functional-264648
1f01056fbffb45cb8baaa0cce72d2d55ef97f5bae89313c9a0a7ebe7c48474ef
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-264648 image build -t localhost/my-image:functional-264648 testdata/build --alsologtostderr:
I0214 21:27:16.711198  307734 out.go:345] Setting OutFile to fd 1 ...
I0214 21:27:16.712303  307734 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:16.712344  307734 out.go:358] Setting ErrFile to fd 2...
I0214 21:27:16.712365  307734 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 21:27:16.712656  307734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
I0214 21:27:16.713424  307734 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:16.714805  307734 config.go:182] Loaded profile config "functional-264648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 21:27:16.715351  307734 cli_runner.go:164] Run: docker container inspect functional-264648 --format={{.State.Status}}
I0214 21:27:16.733972  307734 ssh_runner.go:195] Run: systemctl --version
I0214 21:27:16.734030  307734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-264648
I0214 21:27:16.757766  307734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33146 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/functional-264648/id_rsa Username:docker}
I0214 21:27:16.847519  307734 build_images.go:161] Building image from path: /tmp/build.3390331973.tar
I0214 21:27:16.847589  307734 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0214 21:27:16.856668  307734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3390331973.tar
I0214 21:27:16.860210  307734 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3390331973.tar: stat -c "%s %y" /var/lib/minikube/build/build.3390331973.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3390331973.tar': No such file or directory
I0214 21:27:16.860241  307734 ssh_runner.go:362] scp /tmp/build.3390331973.tar --> /var/lib/minikube/build/build.3390331973.tar (3072 bytes)
I0214 21:27:16.885182  307734 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3390331973
I0214 21:27:16.894527  307734 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3390331973 -xf /var/lib/minikube/build/build.3390331973.tar
I0214 21:27:16.904486  307734 crio.go:315] Building image: /var/lib/minikube/build/build.3390331973
I0214 21:27:16.904574  307734 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-264648 /var/lib/minikube/build/build.3390331973 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0214 21:27:19.662198  307734 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-264648 /var/lib/minikube/build/build.3390331973 --cgroup-manager=cgroupfs: (2.757580244s)
I0214 21:27:19.662266  307734 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3390331973
I0214 21:27:19.672997  307734 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3390331973.tar
I0214 21:27:19.682939  307734 build_images.go:217] Built localhost/my-image:functional-264648 from /tmp/build.3390331973.tar
I0214 21:27:19.682972  307734 build_images.go:133] succeeded building to: functional-264648
I0214 21:27:19.682977  307734 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
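
The stderr above spells out the build path on a crio cluster: tar the local context, copy the tar to /var/lib/minikube/build/ on the node, untar it, then run `sudo podman build` there (buildkitd is absent, hence the pgrep probe failing first). A sketch of the initial tarring step under those assumptions; the context path matches the log:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func main() {
	// Like /tmp/build.3390331973.tar in the log above.
	f, err := os.CreateTemp("", "build.*.tar")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	root := "testdata/build" // the build context passed to `image build`
	err = filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, _ := filepath.Rel(root, path)
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("context tarred to", f.Name())
}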

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-264648
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
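
All three UpdateContextCmd subtests above exercise the same command; run by hand it rewrites the kubeconfig entry for the profile's apiserver (a minimal sketch, assuming kubectl is installed on the host):

	# Repoint the kubeconfig context at the profile, then verify
	minikube -p functional-264648 update-context
	kubectl config current-context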

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image load --daemon kicbase/echo-server:functional-264648 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-264648 image load --daemon kicbase/echo-server:functional-264648 --alsologtostderr: (1.413690766s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.72s)
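
The load path exercised above can be driven manually: the image is read from the host's docker daemon and pushed into the node's container runtime (commands taken from this run):

	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-264648
	minikube -p functional-264648 image load --daemon kicbase/echo-server:functional-264648
	minikube -p functional-264648 image ls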

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image load --daemon kicbase/echo-server:functional-264648 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "436.80004ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "90.065328ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-264648
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image load --daemon kicbase/echo-server:functional-264648 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "454.994714ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "73.916428ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)
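
The profile listings timed above (here and in profile_list) come from the same command family; the --light variant is faster because it skips probing cluster status (a sketch; output depends on the profiles present):

	minikube profile list                  # human-readable table
	minikube profile list -o json          # machine-readable, as timed above
	minikube profile list -o json --light  # no status probes, hence ~74ms vs ~455ms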

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image save kicbase/echo-server:functional-264648 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-264648 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-264648 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-264648 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-264648 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 304011: os: process already finished
helpers_test.go:502: unable to terminate pid 303848: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image rm kicbase/echo-server:functional-264648 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-264648 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-264648 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e1886b8f-6691-4c7e-805d-802b0b68b5dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e1886b8f-6691-4c7e-805d-802b0b68b5dd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004343046s
I0214 21:25:37.440940  278186 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)
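
ImageSaveToFile and ImageLoadFromFile together form a tarball round-trip (a sketch; ./echo-server-save.tar is a placeholder for the workspace path used in the run):

	# Export the image from the cluster to a tar on the host
	minikube -p functional-264648 image save kicbase/echo-server:functional-264648 ./echo-server-save.tar
	# Re-import the tar into the cluster's runtime
	minikube -p functional-264648 image load ./echo-server-save.tar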

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-264648
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 image save --daemon kicbase/echo-server:functional-264648 --alsologtostderr
E0214 21:25:29.831948  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-264648
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-264648 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.19.185 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-264648 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
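
The tunnel subtests above reduce to a simple manual workflow: keep a tunnel running, then hit the LoadBalancer IP the service is assigned (a sketch; the IP below is the one AccessDirect reported working):

	minikube -p functional-264648 tunnel &
	kubectl --context functional-264648 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.98.19.185/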

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-264648 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-264648 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-j7vnx" [530613c7-f619-460f-9b31-78e5753ad45d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-j7vnx" [530613c7-f619-460f-9b31-78e5753ad45d] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.002943986s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 service list -o json
functional_test.go:1511: Took "513.821169ms" to run "out/minikube-linux-arm64 -p functional-264648 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:30577
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:30577
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
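
The ServiceCmd subtests follow the usual expose-and-resolve pattern (commands as run above; the NodePort 30577 is whatever Kubernetes allocates):

	kubectl --context functional-264648 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-264648 expose deployment hello-node --type=NodePort --port=8080
	minikube -p functional-264648 service hello-node --url   # printed http://192.168.49.2:30577 here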

TestFunctional/parallel/MountCmd/any-port (25.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdany-port3051396763/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739568361499970433" to /tmp/TestFunctionalparallelMountCmdany-port3051396763/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739568361499970433" to /tmp/TestFunctionalparallelMountCmdany-port3051396763/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739568361499970433" to /tmp/TestFunctionalparallelMountCmdany-port3051396763/001/test-1739568361499970433
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.876162ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0214 21:26:01.876723  278186 retry.go:31] will retry after 405.009171ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 14 21:26 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 14 21:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 14 21:26 test-1739568361499970433
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh cat /mount-9p/test-1739568361499970433
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-264648 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fe30a078-e92b-4ba9-9592-600e99e3cd33] Pending
helpers_test.go:344: "busybox-mount" [fe30a078-e92b-4ba9-9592-600e99e3cd33] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fe30a078-e92b-4ba9-9592-600e99e3cd33] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fe30a078-e92b-4ba9-9592-600e99e3cd33] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 23.003272364s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-264648 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdany-port3051396763/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (25.82s)
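
The 9p mount above can be reproduced by hand; the initial findmnt failure is only a race while the mount daemon comes up, which the test papers over with a retry (a sketch; /tmp/host-dir is a placeholder):

	minikube mount -p functional-264648 /tmp/host-dir:/mount-9p &
	minikube -p functional-264648 ssh "findmnt -T /mount-9p | grep 9p"
	minikube -p functional-264648 ssh -- ls -la /mount-9p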

TestFunctional/parallel/MountCmd/specific-port (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdspecific-port2271042969/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.608242ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0214 21:26:27.686830  278186 retry.go:31] will retry after 670.222142ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdspecific-port2271042969/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 ssh "sudo umount -f /mount-9p": exit status 1 (274.733024ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-264648 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdspecific-port2271042969/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.14s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T" /mount1: exit status 1 (590.286851ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0214 21:26:29.987282  278186 retry.go:31] will retry after 628.013688ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-264648 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-264648 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-264648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1478408973/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.14s)
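
VerifyCleanup leans on the mount command's kill switch, which tears down every outstanding mount daemon for the profile in one call (as run above):

	minikube mount -p functional-264648 --kill=true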

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-264648
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-264648
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-264648
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (176.65s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0214 21:30:27.976354  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:27.982760  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:27.994152  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:28.015659  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:28.057093  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:28.138511  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:28.299994  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:28.621487  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:29.263084  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:30.544460  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:33.106913  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:38.229132  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:30:48.470529  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:31:08.951964  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:31:49.913247  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m55.802486544s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (176.65s)
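
The whole StartCluster step reduces to one start invocation plus a status check (flags copied from the log; --memory is in MB and --ha requests multiple control planes):

	minikube start -p ha-929900 --ha --memory 2200 --wait true --driver=docker --container-runtime=crio
	minikube -p ha-929900 status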

TestMultiControlPlane/serial/DeployApp (8.44s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 kubectl -- rollout status deployment/busybox: (5.185406316s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-2xm74 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-s9vb7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-tm8dd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-2xm74 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-s9vb7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-tm8dd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-2xm74 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-s9vb7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-tm8dd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.44s)

TestMultiControlPlane/serial/PingHostFromPods (1.8s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-2xm74 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-2xm74 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-s9vb7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-s9vb7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-tm8dd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 kubectl -- exec busybox-58667487b6-tm8dd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.80s)

TestMultiControlPlane/serial/AddWorkerNode (28.86s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 node add --alsologtostderr -v 5: (27.813116016s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5: (1.049529076s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.86s)
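
Adding the worker is a single node operation; without extra flags the new node joins as a worker rather than a control plane (as run above):

	minikube -p ha-929900 node add
	minikube -p ha-929900 status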

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-929900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

TestMultiControlPlane/serial/CopyFile (19.41s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --output json --alsologtostderr -v 5
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp testdata/cp-test.txt ha-929900:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile51223015/001/cp-test_ha-929900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900:/home/docker/cp-test.txt ha-929900-m02:/home/docker/cp-test_ha-929900_ha-929900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test_ha-929900_ha-929900-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900:/home/docker/cp-test.txt ha-929900-m03:/home/docker/cp-test_ha-929900_ha-929900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m03 "sudo cat /home/docker/cp-test_ha-929900_ha-929900-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900:/home/docker/cp-test.txt ha-929900-m04:/home/docker/cp-test_ha-929900_ha-929900-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m04 "sudo cat /home/docker/cp-test_ha-929900_ha-929900-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp testdata/cp-test.txt ha-929900-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile51223015/001/cp-test_ha-929900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m02:/home/docker/cp-test.txt ha-929900:/home/docker/cp-test_ha-929900-m02_ha-929900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900 "sudo cat /home/docker/cp-test_ha-929900-m02_ha-929900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m02:/home/docker/cp-test.txt ha-929900-m03:/home/docker/cp-test_ha-929900-m02_ha-929900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m03 "sudo cat /home/docker/cp-test_ha-929900-m02_ha-929900-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m02:/home/docker/cp-test.txt ha-929900-m04:/home/docker/cp-test_ha-929900-m02_ha-929900-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m04 "sudo cat /home/docker/cp-test_ha-929900-m02_ha-929900-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp testdata/cp-test.txt ha-929900-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile51223015/001/cp-test_ha-929900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m03:/home/docker/cp-test.txt ha-929900:/home/docker/cp-test_ha-929900-m03_ha-929900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900 "sudo cat /home/docker/cp-test_ha-929900-m03_ha-929900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m03:/home/docker/cp-test.txt ha-929900-m02:/home/docker/cp-test_ha-929900-m03_ha-929900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m03 "sudo cat /home/docker/cp-test.txt"
E0214 21:32:45.968615  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test_ha-929900-m03_ha-929900-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m03:/home/docker/cp-test.txt ha-929900-m04:/home/docker/cp-test_ha-929900-m03_ha-929900-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m04 "sudo cat /home/docker/cp-test_ha-929900-m03_ha-929900-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp testdata/cp-test.txt ha-929900-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile51223015/001/cp-test_ha-929900-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m04:/home/docker/cp-test.txt ha-929900:/home/docker/cp-test_ha-929900-m04_ha-929900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900 "sudo cat /home/docker/cp-test_ha-929900-m04_ha-929900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m04:/home/docker/cp-test.txt ha-929900-m02:/home/docker/cp-test_ha-929900-m04_ha-929900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test_ha-929900-m04_ha-929900-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 cp ha-929900-m04:/home/docker/cp-test.txt ha-929900-m03:/home/docker/cp-test_ha-929900-m04_ha-929900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 ssh -n ha-929900-m03 "sudo cat /home/docker/cp-test_ha-929900-m04_ha-929900-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.41s)
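
The CopyFile matrix above is built from two primitives, cp into a node and ssh-cat back out; one node pair from the run looks like this:

	minikube -p ha-929900 cp testdata/cp-test.txt ha-929900-m02:/home/docker/cp-test.txt
	minikube -p ha-929900 ssh -n ha-929900-m02 "sudo cat /home/docker/cp-test.txt"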

TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 node stop m02 --alsologtostderr -v 5: (11.977045993s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5: exit status 7 (772.778823ms)

-- stdout --
	ha-929900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929900-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-929900-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929900-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0214 21:33:04.200923  324152 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:33:04.201071  324152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:33:04.201095  324152 out.go:358] Setting ErrFile to fd 2...
	I0214 21:33:04.201103  324152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:33:04.201398  324152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:33:04.201660  324152 out.go:352] Setting JSON to false
	I0214 21:33:04.201718  324152 mustload.go:65] Loading cluster: ha-929900
	I0214 21:33:04.201812  324152 notify.go:220] Checking for updates...
	I0214 21:33:04.202287  324152 config.go:182] Loaded profile config "ha-929900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:33:04.202318  324152 status.go:174] checking status of ha-929900 ...
	I0214 21:33:04.202947  324152 cli_runner.go:164] Run: docker container inspect ha-929900 --format={{.State.Status}}
	I0214 21:33:04.224571  324152 status.go:371] ha-929900 host status = "Running" (err=<nil>)
	I0214 21:33:04.224598  324152 host.go:66] Checking if "ha-929900" exists ...
	I0214 21:33:04.224921  324152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-929900
	I0214 21:33:04.250471  324152 host.go:66] Checking if "ha-929900" exists ...
	I0214 21:33:04.250854  324152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:33:04.251131  324152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-929900
	I0214 21:33:04.270481  324152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/ha-929900/id_rsa Username:docker}
	I0214 21:33:04.364808  324152 ssh_runner.go:195] Run: systemctl --version
	I0214 21:33:04.369379  324152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:33:04.381333  324152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:33:04.457259  324152 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:73 SystemTime:2025-02-14 21:33:04.447610329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:33:04.458022  324152 kubeconfig.go:125] found "ha-929900" server: "https://192.168.49.254:8443"
	I0214 21:33:04.458070  324152 api_server.go:166] Checking apiserver status ...
	I0214 21:33:04.458146  324152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:33:04.470547  324152 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	I0214 21:33:04.485240  324152 api_server.go:182] apiserver freezer: "5:freezer:/docker/cd1462b96ee9c578a1265c1b2b062c72538b0ff54ca96378a4556ec3c87361a9/crio/crio-345d3f240c28de142de2076a6bb411c8a4f71ffddadc9b04f0dd39ada3d63c76"
	I0214 21:33:04.485312  324152 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cd1462b96ee9c578a1265c1b2b062c72538b0ff54ca96378a4556ec3c87361a9/crio/crio-345d3f240c28de142de2076a6bb411c8a4f71ffddadc9b04f0dd39ada3d63c76/freezer.state
	I0214 21:33:04.497634  324152 api_server.go:204] freezer state: "THAWED"
	I0214 21:33:04.497679  324152 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0214 21:33:04.507247  324152 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0214 21:33:04.507277  324152 status.go:463] ha-929900 apiserver status = Running (err=<nil>)
	I0214 21:33:04.507288  324152 status.go:176] ha-929900 status: &{Name:ha-929900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:33:04.507306  324152 status.go:174] checking status of ha-929900-m02 ...
	I0214 21:33:04.507625  324152 cli_runner.go:164] Run: docker container inspect ha-929900-m02 --format={{.State.Status}}
	I0214 21:33:04.536933  324152 status.go:371] ha-929900-m02 host status = "Stopped" (err=<nil>)
	I0214 21:33:04.536959  324152 status.go:384] host is not running, skipping remaining checks
	I0214 21:33:04.536966  324152 status.go:176] ha-929900-m02 status: &{Name:ha-929900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:33:04.536987  324152 status.go:174] checking status of ha-929900-m03 ...
	I0214 21:33:04.537324  324152 cli_runner.go:164] Run: docker container inspect ha-929900-m03 --format={{.State.Status}}
	I0214 21:33:04.555955  324152 status.go:371] ha-929900-m03 host status = "Running" (err=<nil>)
	I0214 21:33:04.555985  324152 host.go:66] Checking if "ha-929900-m03" exists ...
	I0214 21:33:04.556324  324152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-929900-m03
	I0214 21:33:04.574806  324152 host.go:66] Checking if "ha-929900-m03" exists ...
	I0214 21:33:04.575194  324152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:33:04.575250  324152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-929900-m03
	I0214 21:33:04.594389  324152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/ha-929900-m03/id_rsa Username:docker}
	I0214 21:33:04.696745  324152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:33:04.708783  324152 kubeconfig.go:125] found "ha-929900" server: "https://192.168.49.254:8443"
	I0214 21:33:04.708816  324152 api_server.go:166] Checking apiserver status ...
	I0214 21:33:04.708887  324152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:33:04.720151  324152 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1350/cgroup
	I0214 21:33:04.729733  324152 api_server.go:182] apiserver freezer: "5:freezer:/docker/02a572349f877ce0fd7bdd8776e441759f5793e179c194ef38b3edcc3527f4f1/crio/crio-d2ab2e6992fac82bc9a69d06ee39aae4ab22a6671962c08911bc1b0819f24b9f"
	I0214 21:33:04.729805  324152 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/02a572349f877ce0fd7bdd8776e441759f5793e179c194ef38b3edcc3527f4f1/crio/crio-d2ab2e6992fac82bc9a69d06ee39aae4ab22a6671962c08911bc1b0819f24b9f/freezer.state
	I0214 21:33:04.738704  324152 api_server.go:204] freezer state: "THAWED"
	I0214 21:33:04.738732  324152 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0214 21:33:04.746958  324152 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0214 21:33:04.746988  324152 status.go:463] ha-929900-m03 apiserver status = Running (err=<nil>)
	I0214 21:33:04.746998  324152 status.go:176] ha-929900-m03 status: &{Name:ha-929900-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:33:04.747014  324152 status.go:174] checking status of ha-929900-m04 ...
	I0214 21:33:04.747363  324152 cli_runner.go:164] Run: docker container inspect ha-929900-m04 --format={{.State.Status}}
	I0214 21:33:04.764538  324152 status.go:371] ha-929900-m04 host status = "Running" (err=<nil>)
	I0214 21:33:04.764564  324152 host.go:66] Checking if "ha-929900-m04" exists ...
	I0214 21:33:04.764863  324152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-929900-m04
	I0214 21:33:04.781865  324152 host.go:66] Checking if "ha-929900-m04" exists ...
	I0214 21:33:04.782189  324152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:33:04.782246  324152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-929900-m04
	I0214 21:33:04.799495  324152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/ha-929900-m04/id_rsa Username:docker}
	I0214 21:33:04.888027  324152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:33:04.899252  324152 status.go:176] ha-929900-m04 status: &{Name:ha-929900-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
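The stderr above shows the fixed probe sequence the status command runs per control-plane node: pgrep the kube-apiserver process, resolve its freezer cgroup from /proc/<pid>/cgroup, confirm the cgroup is THAWED, then hit the load-balanced /healthz endpoint. Below is a minimal Go sketch of the last two steps only, taking the freezer.state path and healthz URL as arguments (the concrete values appear in the log); it is an illustration, not the code in status.go or api_server.go.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"strings"
	)

	func main() {
		// Usage: freezercheck <freezer.state path> <healthz URL>, e.g. the
		// /sys/fs/cgroup/freezer/docker/.../crio/crio-.../freezer.state path
		// and https://192.168.49.254:8443/healthz seen in the log above.
		statePath, healthz := os.Args[1], os.Args[2]

		raw, err := os.ReadFile(statePath)
		if err != nil {
			fmt.Fprintln(os.Stderr, "freezer state:", err)
			os.Exit(1)
		}
		state := strings.TrimSpace(string(raw))
		fmt.Printf("apiserver freezer state: %q\n", state)
		if state != "THAWED" {
			os.Exit(1) // frozen process group: apiserver cannot be healthy
		}

		// The apiserver cert is signed by the cluster CA; a throwaway probe
		// skips verification, a real client should trust that CA instead.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(healthz)
		if err != nil {
			fmt.Fprintln(os.Stderr, "healthz:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode)
	}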

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 node start m02 --alsologtostderr -v 5
E0214 21:33:11.835483  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 node start m02 --alsologtostderr -v 5: (31.152672268s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5: (1.443484436s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.76s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.399550017s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 stop --alsologtostderr -v 5: (36.950490491s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 start --wait true --alsologtostderr -v 5
E0214 21:35:27.975881  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:35:55.677249  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 start --wait true --alsologtostderr -v 5: (1m42.889232549s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.03s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 node delete m03 --alsologtostderr -v 5: (10.773231306s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.74s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 stop --alsologtostderr -v 5: (35.601892109s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5: exit status 7 (121.377003ms)

                                                
                                                
-- stdout --
	ha-929900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-929900-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-929900-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:36:48.095939  338235 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:36:48.096080  338235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:36:48.096092  338235 out.go:358] Setting ErrFile to fd 2...
	I0214 21:36:48.096108  338235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:36:48.096393  338235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:36:48.096591  338235 out.go:352] Setting JSON to false
	I0214 21:36:48.096646  338235 mustload.go:65] Loading cluster: ha-929900
	I0214 21:36:48.096716  338235 notify.go:220] Checking for updates...
	I0214 21:36:48.097754  338235 config.go:182] Loaded profile config "ha-929900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:36:48.097789  338235 status.go:174] checking status of ha-929900 ...
	I0214 21:36:48.098356  338235 cli_runner.go:164] Run: docker container inspect ha-929900 --format={{.State.Status}}
	I0214 21:36:48.117598  338235 status.go:371] ha-929900 host status = "Stopped" (err=<nil>)
	I0214 21:36:48.117624  338235 status.go:384] host is not running, skipping remaining checks
	I0214 21:36:48.117632  338235 status.go:176] ha-929900 status: &{Name:ha-929900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:36:48.117662  338235 status.go:174] checking status of ha-929900-m02 ...
	I0214 21:36:48.117979  338235 cli_runner.go:164] Run: docker container inspect ha-929900-m02 --format={{.State.Status}}
	I0214 21:36:48.143615  338235 status.go:371] ha-929900-m02 host status = "Stopped" (err=<nil>)
	I0214 21:36:48.143640  338235 status.go:384] host is not running, skipping remaining checks
	I0214 21:36:48.143648  338235 status.go:176] ha-929900-m02 status: &{Name:ha-929900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:36:48.143668  338235 status.go:174] checking status of ha-929900-m04 ...
	I0214 21:36:48.143973  338235 cli_runner.go:164] Run: docker container inspect ha-929900-m04 --format={{.State.Status}}
	I0214 21:36:48.161046  338235 status.go:371] ha-929900-m04 host status = "Stopped" (err=<nil>)
	I0214 21:36:48.161067  338235 status.go:384] host is not running, skipping remaining checks
	I0214 21:36:48.161073  338235 status.go:176] ha-929900-m04 status: &{Name:ha-929900-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.72s)
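Each node check in the stderr above begins with docker container inspect on {{.State.Status}}, and a non-running host short-circuits the kubelet and apiserver checks. A minimal Go sketch of that pattern, shelling out to the Docker CLI the same way; the helper name is invented for illustration.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// hostStatus asks Docker for the container's state, mirroring the
	// "docker container inspect <node> --format={{.State.Status}}" calls above.
	func hostStatus(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		for _, node := range os.Args[1:] {
			st, err := hostStatus(node)
			if err != nil {
				fmt.Printf("%s: inspect failed: %v\n", node, err)
				continue
			}
			fmt.Printf("%s: host=%s\n", node, st)
			if st != "running" {
				// Same short-circuit as the status.go:384 lines above.
				fmt.Printf("%s: host is not running, skipping remaining checks\n", node)
			}
		}
	}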

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (114.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0214 21:37:45.968683  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m53.524609901s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (114.52s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (68.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 node add --control-plane --alsologtostderr -v 5
E0214 21:39:09.035658  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 node add --control-plane --alsologtostderr -v 5: (1m7.885148928s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-929900 status --alsologtostderr -v 5: (1.009370597s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (68.89s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.05621221s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

                                                
                                    
TestJSONOutput/start/Command (48.58s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-175107 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0214 21:40:27.976556  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-175107 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (48.569482099s)
--- PASS: TestJSONOutput/start/Command (48.58s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-175107 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-175107 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-175107 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-175107 --output=json --user=testUser: (5.831727058s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-974965 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-974965 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.058363ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9246a4c5-a398-4cdc-bfa7-143ec58dc47a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-974965] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac6e9966-afe9-403f-9fdb-d1d0258bde38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20315"}}
	{"specversion":"1.0","id":"3a09995c-cb9b-456a-95aa-001b90ba0f54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3b2c43b2-f33d-47e3-9efa-3320f459fb9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig"}}
	{"specversion":"1.0","id":"c77551ed-1771-4958-8608-be4686ad3c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube"}}
	{"specversion":"1.0","id":"34078909-069b-44af-934b-9fbda57068d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4a60fb16-29b9-412c-b833-7c4b6c675a89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"89488fef-faef-4fed-a985-28458604c6d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-974965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-974965
--- PASS: TestErrorJSONOutput (0.26s)
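Every stdout line above is a CloudEvents envelope, which is what --output=json produces: one io.k8s.sigs.minikube.step event, several info events, and a final io.k8s.sigs.minikube.error event carrying the exit code. A minimal Go sketch of a consumer for that stream, using only fields visible above:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the CloudEvents envelope in the stdout block above; all
	// data values there are strings.
	type event struct {
		ID   string            `json:"id"`
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Piped the eight lines above, it would print the single setup step and the DRV_UNSUPPORTED_OS error with exit code 56.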

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.6s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-166785 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-166785 --network=: (37.451867965s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-166785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-166785
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-166785: (2.127623139s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.60s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.28s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-585407 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-585407 --network=bridge: (32.240618873s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-585407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-585407
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-585407: (2.007151447s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.28s)

                                                
                                    
TestKicExistingNetwork (32.34s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0214 21:42:16.752970  278186 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0214 21:42:16.770081  278186 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0214 21:42:16.770904  278186 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0214 21:42:16.771889  278186 cli_runner.go:164] Run: docker network inspect existing-network
W0214 21:42:16.788325  278186 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0214 21:42:16.788355  278186 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0214 21:42:16.788373  278186 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0214 21:42:16.789219  278186 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0214 21:42:16.810806  278186 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0519224eb73 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ed:9e:3b:21} reservation:<nil>}
I0214 21:42:16.811863  278186 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400187f0b0}
I0214 21:42:16.811905  278186 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0214 21:42:16.811969  278186 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0214 21:42:16.879982  278186 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-008178 --network=existing-network
E0214 21:42:45.968446  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-008178 --network=existing-network: (30.111013177s)
helpers_test.go:175: Cleaning up "existing-network-008178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-008178
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-008178: (2.072607761s)
I0214 21:42:49.079955  278186 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.34s)
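The log above records the subnet walk: 192.168.49.0/24 is already held by bridge br-d0519224eb73, so the next candidate, 192.168.58.0/24, is chosen and the network is created with an MTU of 1500. A minimal Go sketch of that walk; the step of 9 between candidates matches the 49-to-58 jump in the log but is an assumption here, and the -o --ip-masq / -o --icc options from the log are omitted:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const name = "existing-network"
		// Candidate private /24s; stepping the third octet by 9 reproduces
		// the 49-taken/58-chosen pattern above, but the step size is an
		// assumption, not a claim about minikube's picker.
		for octet := 49; octet <= 94; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			gateway := fmt.Sprintf("192.168.%d.1", octet)
			out, err := exec.Command("docker", "network", "create", "--driver=bridge",
				"--subnet="+subnet, "--gateway="+gateway,
				"-o", "com.docker.network.driver.mtu=1500", name).CombinedOutput()
			if err != nil {
				// Typically "Pool overlaps with other one on this address space".
				fmt.Printf("%s unavailable: %s", subnet, out)
				continue
			}
			fmt.Printf("created %s on %s\n", name, subnet)
			return
		}
		fmt.Println("no free subnet found")
	}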

                                                
                                    
TestKicCustomSubnet (36.98s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-190560 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-190560 --subnet=192.168.60.0/24: (34.874934908s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-190560 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-190560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-190560
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-190560: (2.074204926s)
--- PASS: TestKicCustomSubnet (36.98s)
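The inspect step reads back the subnet Docker actually assigned, via a Go template over the network's IPAM config. A minimal sketch of the same check, reusing the network name and subnet from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const network, want = "custom-subnet-190560", "192.168.60.0/24"
		// Same template as the test: first IPAM config entry's Subnet.
		out, err := exec.Command("docker", "network", "inspect", network,
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != want {
			fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
		} else {
			fmt.Println("subnet verified:", want)
		}
	}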

                                                
                                    
TestKicStaticIP (34.96s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-791867 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-791867 --static-ip=192.168.200.200: (32.509092109s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-791867 ip
helpers_test.go:175: Cleaning up "static-ip-791867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-791867
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-791867: (2.302069799s)
--- PASS: TestKicStaticIP (34.96s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (67.82s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-396906 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-396906 --driver=docker  --container-runtime=crio: (29.180739938s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-399983 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-399983 --driver=docker  --container-runtime=crio: (32.802117725s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-396906
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-399983
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-399983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-399983
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-399983: (2.005206494s)
helpers_test.go:175: Cleaning up "first-396906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-396906
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-396906: (2.339314546s)
--- PASS: TestMinikubeProfile (67.82s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-073173 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-073173 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.945839515s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.95s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-073173 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-075561 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-075561 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.026934668s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.03s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-075561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-073173 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-073173 --alsologtostderr -v=5: (1.636995131s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-075561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-075561
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-075561: (1.204219754s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.75s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-075561
E0214 21:45:27.976475  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-075561: (6.749529817s)
--- PASS: TestMountStart/serial/RestartStopped (7.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-075561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-076743 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0214 21:46:51.039199  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-076743 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.917602249s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.42s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-076743 -- rollout status deployment/busybox: (4.438314554s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-8572v -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-pnhcg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-8572v -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-pnhcg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-8572v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-pnhcg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.41s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-8572v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-8572v -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-pnhcg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-076743 -- exec busybox-58667487b6-pnhcg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)
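The pipeline above pulls the host IP out of busybox nslookup output: awk 'NR==5' selects the answer line for host.minikube.internal and cut -d' ' -f3 takes its third space-separated field, which the follow-up command pings. A minimal Go sketch of that parse; the sample output is illustrative, since real output varies with the resolver:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Illustrative busybox nslookup output; line 5 (NR==5) is the answer
		// record and its third space-separated field (-f3) is the IP.
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.67.1 host.minikube.internal"

		lines := strings.Split(sample, "\n")
		fields := strings.Split(lines[4], " ") // cut -d' ' splits on single spaces
		fmt.Println("host IP:", fields[2])     // prints 192.168.67.1
	}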

                                                
                                    
TestMultiNode/serial/AddNode (28.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-076743 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-076743 -v=5 --alsologtostderr: (27.335710874s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.02s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-076743 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp testdata/cp-test.txt multinode-076743:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2810477348/001/cp-test_multinode-076743.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743:/home/docker/cp-test.txt multinode-076743-m02:/home/docker/cp-test_multinode-076743_multinode-076743-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m02 "sudo cat /home/docker/cp-test_multinode-076743_multinode-076743-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743:/home/docker/cp-test.txt multinode-076743-m03:/home/docker/cp-test_multinode-076743_multinode-076743-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m03 "sudo cat /home/docker/cp-test_multinode-076743_multinode-076743-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp testdata/cp-test.txt multinode-076743-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2810477348/001/cp-test_multinode-076743-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743-m02:/home/docker/cp-test.txt multinode-076743:/home/docker/cp-test_multinode-076743-m02_multinode-076743.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743 "sudo cat /home/docker/cp-test_multinode-076743-m02_multinode-076743.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743-m02:/home/docker/cp-test.txt multinode-076743-m03:/home/docker/cp-test_multinode-076743-m02_multinode-076743-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m03 "sudo cat /home/docker/cp-test_multinode-076743-m02_multinode-076743-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp testdata/cp-test.txt multinode-076743-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2810477348/001/cp-test_multinode-076743-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743-m03:/home/docker/cp-test.txt multinode-076743:/home/docker/cp-test_multinode-076743-m03_multinode-076743.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743 "sudo cat /home/docker/cp-test_multinode-076743-m03_multinode-076743.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 cp multinode-076743-m03:/home/docker/cp-test.txt multinode-076743-m02:/home/docker/cp-test_multinode-076743-m03_multinode-076743-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 ssh -n multinode-076743-m02 "sudo cat /home/docker/cp-test_multinode-076743-m03_multinode-076743-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.04s)
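Each step above is one half of a copy-and-verify round trip: minikube cp a file onto a node, then ssh -n into it and sudo cat the file back. A minimal Go sketch of a single round trip, reusing the binary, profile, and paths from this run:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const bin, profile = "out/minikube-linux-arm64", "multinode-076743"
		node, local, remote := "multinode-076743-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt"

		// Copy the file in, as the cp steps above do.
		if out, err := exec.Command(bin, "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "cp failed: %v\n%s", err, out)
			os.Exit(1)
		}
		want, err := os.ReadFile(local)
		if err != nil {
			os.Exit(1)
		}
		// Read it back over ssh, as the "sudo cat" steps above do.
		got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "ssh failed:", err)
			os.Exit(1)
		}
		if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			fmt.Println("copy verified")
		} else {
			fmt.Println("contents differ after copy")
		}
	}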

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-076743 node stop m03: (1.249968444s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-076743 status: exit status 7 (523.164799ms)

                                                
                                                
-- stdout --
	multinode-076743
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-076743-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-076743-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status --alsologtostderr
E0214 21:47:45.968627  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-076743 status --alsologtostderr: exit status 7 (530.941822ms)

                                                
                                                
-- stdout --
	multinode-076743
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-076743-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-076743-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:47:45.908447  392011 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:47:45.908693  392011 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:47:45.908721  392011 out.go:358] Setting ErrFile to fd 2...
	I0214 21:47:45.908740  392011 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:47:45.909058  392011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:47:45.909292  392011 out.go:352] Setting JSON to false
	I0214 21:47:45.909365  392011 mustload.go:65] Loading cluster: multinode-076743
	I0214 21:47:45.909425  392011 notify.go:220] Checking for updates...
	I0214 21:47:45.909872  392011 config.go:182] Loaded profile config "multinode-076743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:47:45.910214  392011 status.go:174] checking status of multinode-076743 ...
	I0214 21:47:45.911354  392011 cli_runner.go:164] Run: docker container inspect multinode-076743 --format={{.State.Status}}
	I0214 21:47:45.931256  392011 status.go:371] multinode-076743 host status = "Running" (err=<nil>)
	I0214 21:47:45.931284  392011 host.go:66] Checking if "multinode-076743" exists ...
	I0214 21:47:45.931623  392011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-076743
	I0214 21:47:45.952827  392011 host.go:66] Checking if "multinode-076743" exists ...
	I0214 21:47:45.953147  392011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:47:45.953205  392011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-076743
	I0214 21:47:45.974410  392011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33271 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/multinode-076743/id_rsa Username:docker}
	I0214 21:47:46.068904  392011 ssh_runner.go:195] Run: systemctl --version
	I0214 21:47:46.073343  392011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:47:46.086239  392011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:47:46.148559  392011 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-02-14 21:47:46.137013128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:47:46.149446  392011 kubeconfig.go:125] found "multinode-076743" server: "https://192.168.67.2:8443"
	I0214 21:47:46.149497  392011 api_server.go:166] Checking apiserver status ...
	I0214 21:47:46.149547  392011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:47:46.160714  392011 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	I0214 21:47:46.171185  392011 api_server.go:182] apiserver freezer: "5:freezer:/docker/712e99ee0782b779d3ed87f894927d984a25086a54c4838d80e14703452d0132/crio/crio-da30b34607c184bae0141746c2d782e04702a2c91a306237407bcf962e6d192c"
	I0214 21:47:46.171257  392011 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/712e99ee0782b779d3ed87f894927d984a25086a54c4838d80e14703452d0132/crio/crio-da30b34607c184bae0141746c2d782e04702a2c91a306237407bcf962e6d192c/freezer.state
	I0214 21:47:46.180487  392011 api_server.go:204] freezer state: "THAWED"
	I0214 21:47:46.180521  392011 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0214 21:47:46.188865  392011 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0214 21:47:46.188897  392011 status.go:463] multinode-076743 apiserver status = Running (err=<nil>)
	I0214 21:47:46.188915  392011 status.go:176] multinode-076743 status: &{Name:multinode-076743 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:47:46.188934  392011 status.go:174] checking status of multinode-076743-m02 ...
	I0214 21:47:46.189260  392011 cli_runner.go:164] Run: docker container inspect multinode-076743-m02 --format={{.State.Status}}
	I0214 21:47:46.209142  392011 status.go:371] multinode-076743-m02 host status = "Running" (err=<nil>)
	I0214 21:47:46.209169  392011 host.go:66] Checking if "multinode-076743-m02" exists ...
	I0214 21:47:46.209517  392011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-076743-m02
	I0214 21:47:46.232151  392011 host.go:66] Checking if "multinode-076743-m02" exists ...
	I0214 21:47:46.233430  392011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:47:46.233495  392011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-076743-m02
	I0214 21:47:46.251466  392011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/20315-272800/.minikube/machines/multinode-076743-m02/id_rsa Username:docker}
	I0214 21:47:46.344185  392011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:47:46.356183  392011 status.go:176] multinode-076743-m02 status: &{Name:multinode-076743-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:47:46.356222  392011 status.go:174] checking status of multinode-076743-m03 ...
	I0214 21:47:46.356538  392011 cli_runner.go:164] Run: docker container inspect multinode-076743-m03 --format={{.State.Status}}
	I0214 21:47:46.373900  392011 status.go:371] multinode-076743-m03 host status = "Stopped" (err=<nil>)
	I0214 21:47:46.373923  392011 status.go:384] host is not running, skipping remaining checks
	I0214 21:47:46.373930  392011 status.go:176] multinode-076743-m03 status: &{Name:multinode-076743-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
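
The stderr trace above lays out the full status algorithm: inspect the container state with docker, ssh in, check kubelet with systemctl, locate the apiserver's freezer cgroup, then hit /healthz. A minimal Go sketch of that final probe, using the endpoint from the log (probeHealthz is a hypothetical helper, and skipping TLS verification is a simplification for the sketch; the real check trusts the cluster CA instead):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// probeHealthz mirrors the last step of the status check above:
	// GET https://<apiserver>/healthz and treat HTTP 200 as "Running".
	func probeHealthz(endpoint string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: the sketch skips certificate verification;
			// the real check verifies against the profile's cluster CA.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}
	
	func main() {
		if err := probeHealthz("https://192.168.67.2:8443"); err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		fmt.Println("apiserver status = Running")
	}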

TestMultiNode/serial/StartAfterStop (7.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-076743 node start m03 -v=5 --alsologtostderr: (7.000982605s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.78s)

TestMultiNode/serial/RestartKeepsNodes (78.73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-076743
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-076743
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-076743: (24.772730442s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-076743 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-076743 --wait=true -v=5 --alsologtostderr: (53.822738622s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-076743
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.73s)

TestMultiNode/serial/DeleteNode (5.51s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-076743 node delete m03: (4.847751351s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.51s)
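
The go-template in the last kubectl call above prints the status of every node's Ready condition, which is how the test confirms two Ready nodes remain after deleting m03. A small sketch of the same template executed with Go's text/template (the field names are exported here because it runs against a Go struct rather than kubectl's decoded JSON, and the sample data is made up):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Minimal stand-ins for the node fields the template walks.
	type condition struct{ Type, Status string }
	
	type node struct {
		Status struct{ Conditions []condition }
	}
	
	type nodeList struct{ Items []node }
	
	func main() {
		// Same shape as the test's template: emit the status of each "Ready" condition.
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		var n node
		n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
		// Two nodes remain after `node delete m03`, both Ready.
		if err := tmpl.Execute(os.Stdout, nodeList{Items: []node{n, n}}); err != nil {
			panic(err)
		}
	}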

TestMultiNode/serial/StopMultiNode (23.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-076743 stop: (23.612604357s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-076743 status: exit status 7 (105.649227ms)

-- stdout --
	multinode-076743
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-076743-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-076743 status --alsologtostderr: exit status 7 (108.524163ms)

-- stdout --
	multinode-076743
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-076743-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0214 21:49:42.177907  399770 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:49:42.178123  399770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:49:42.178159  399770 out.go:358] Setting ErrFile to fd 2...
	I0214 21:49:42.178184  399770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:49:42.178461  399770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:49:42.178698  399770 out.go:352] Setting JSON to false
	I0214 21:49:42.178778  399770 mustload.go:65] Loading cluster: multinode-076743
	I0214 21:49:42.178850  399770 notify.go:220] Checking for updates...
	I0214 21:49:42.179958  399770 config.go:182] Loaded profile config "multinode-076743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:49:42.180045  399770 status.go:174] checking status of multinode-076743 ...
	I0214 21:49:42.180702  399770 cli_runner.go:164] Run: docker container inspect multinode-076743 --format={{.State.Status}}
	I0214 21:49:42.202083  399770 status.go:371] multinode-076743 host status = "Stopped" (err=<nil>)
	I0214 21:49:42.202107  399770 status.go:384] host is not running, skipping remaining checks
	I0214 21:49:42.202115  399770 status.go:176] multinode-076743 status: &{Name:multinode-076743 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:49:42.202157  399770 status.go:174] checking status of multinode-076743-m02 ...
	I0214 21:49:42.202488  399770 cli_runner.go:164] Run: docker container inspect multinode-076743-m02 --format={{.State.Status}}
	I0214 21:49:42.231297  399770 status.go:371] multinode-076743-m02 host status = "Stopped" (err=<nil>)
	I0214 21:49:42.231320  399770 status.go:384] host is not running, skipping remaining checks
	I0214 21:49:42.231327  399770 status.go:176] multinode-076743-m02 status: &{Name:multinode-076743-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)
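
Both status calls above exit with status 7 once every host is stopped, and the harness distinguishes that from a clean zero exit. A sketch of recovering that exit code from Go, the way a wrapper around the CLI might (treating 7 as "stopped rather than broken" is an assumption read off this log, not a documented contract):

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Run `minikube status` the way the test harness does and inspect the exit code.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-076743", "status")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Assumption from the log: exit 7 means one or more hosts are stopped.
			fmt.Printf("some hosts stopped:\n%s", out)
		default:
			fmt.Println("status failed:", err)
		}
	}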

TestMultiNode/serial/RestartMultiNode (52.45s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-076743 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0214 21:50:27.976370  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-076743 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.773530462s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-076743 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.45s)

TestMultiNode/serial/ValidateNameConflict (34.37s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-076743
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-076743-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-076743-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.863683ms)

-- stdout --
	* [multinode-076743-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-076743-m02' is duplicated with machine name 'multinode-076743-m02' in profile 'multinode-076743'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-076743-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-076743-m03 --driver=docker  --container-runtime=crio: (31.871204204s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-076743
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-076743: exit status 80 (322.211106ms)

-- stdout --
	* Adding node m03 to cluster multinode-076743 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-076743-m03 already exists in multinode-076743-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-076743-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-076743-m03: (2.016562968s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.37s)
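
The exit-14 failure above comes from profile-name validation: a new profile may not reuse the name of a machine that already belongs to an existing multi-node profile. A hypothetical sketch of such a collision check (isDuplicate and the map layout are illustrative, not minikube's actual data model):

	package main
	
	import "fmt"
	
	// isDuplicate reports whether a candidate profile name collides with an
	// existing profile or with a machine belonging to one. Illustrative only.
	func isDuplicate(name string, machinesByProfile map[string][]string) bool {
		for profile, machines := range machinesByProfile {
			if name == profile {
				return true
			}
			for _, machine := range machines {
				if name == machine {
					return true
				}
			}
		}
		return false
	}
	
	func main() {
		existing := map[string][]string{
			"multinode-076743": {"multinode-076743", "multinode-076743-m02"},
		}
		fmt.Println(isDuplicate("multinode-076743-m02", existing)) // true  -> MK_USAGE, exit 14
		fmt.Println(isDuplicate("multinode-076743-m03", existing)) // false -> start proceeds
	}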

TestPreload (134.06s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-940515 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-940515 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.278578182s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-940515 image pull gcr.io/k8s-minikube/busybox
E0214 21:52:45.968248  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-940515 image pull gcr.io/k8s-minikube/busybox: (3.472056286s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-940515
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-940515: (5.752215459s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-940515 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-940515 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (31.92909659s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-940515 image list
helpers_test.go:175: Cleaning up "test-preload-940515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-940515
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-940515: (2.369440612s)
--- PASS: TestPreload (134.06s)

TestInsufficientStorage (11.34s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-416277 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-416277 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.83556667s)

-- stdout --
	{"specversion":"1.0","id":"c9551371-5b8c-4a11-9b07-f70246d3a087","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-416277] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e910155-01fe-4c43-851c-50d0c6a28e9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20315"}}
	{"specversion":"1.0","id":"f38585a1-aba9-4674-a6b9-24079add7f8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f34b2f3-927f-406c-9d6d-e72c73b3c345","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig"}}
	{"specversion":"1.0","id":"136b9d3a-2494-4525-a0d7-1914d7929512","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube"}}
	{"specversion":"1.0","id":"e6158445-8672-4739-9b99-b0dca7bb9c16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3fd76de1-fd36-404d-9124-e069e13f8e65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4bd9ec16-9359-40a4-8255-ecc595161617","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"752d6e77-4694-4a72-b696-c7f360662971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ffd3a789-c002-4274-a4a8-c8a16264fc46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf157cd5-70d2-4c29-a1b2-2ad5f19d093f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"54059a70-f7bc-4273-b472-ef64a730c69f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-416277\" primary control-plane node in \"insufficient-storage-416277\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"29e9f9a8-9fe2-4ae6-9f42-3a3107c08e1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1739182054-20387 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"767cb502-f7e1-4a02-aa5a-5eaa5b55d552","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"45004148-d45a-4b9b-b163-fe6161c1154f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-416277 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-416277 --output=json --layout=cluster: exit status 7 (286.077073ms)

-- stdout --
	{"Name":"insufficient-storage-416277","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-416277","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0214 21:54:13.129160  417311 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-416277" does not appear in /home/jenkins/minikube-integration/20315-272800/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-416277 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-416277 --output=json --layout=cluster: exit status 7 (277.144915ms)

-- stdout --
	{"Name":"insufficient-storage-416277","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-416277","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0214 21:54:13.408200  417374 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-416277" does not appear in /home/jenkins/minikube-integration/20315-272800/kubeconfig
	E0214 21:54:13.418234  417374 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/insufficient-storage-416277/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-416277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-416277
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-416277: (1.937861825s)
--- PASS: TestInsufficientStorage (11.34s)
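
With --output=json, each stdout line above is a CloudEvents envelope; the final io.k8s.sigs.minikube.error event carries the RSRC_DOCKER_STORAGE name, exit code 26, and remediation advice. A sketch of scanning such a stream line by line (the two sample lines are abbreviated from the log above):

	package main
	
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)
	
	// event is the subset of the CloudEvents envelope this sketch cares about.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}
	
	// Abbreviated sample lines taken from the stdout above.
	const stream = `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	
	func main() {
		sc := bufio.NewScanner(strings.NewReader(stream))
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &ev); err != nil {
				continue // ignore non-JSON noise
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}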

TestRunningBinaryUpgrade (131.98s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2877808203 start -p running-upgrade-872262 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2877808203 start -p running-upgrade-872262 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.778471055s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-872262 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-872262 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m29.733613244s)
helpers_test.go:175: Cleaning up "running-upgrade-872262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-872262
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-872262: (2.668212873s)
--- PASS: TestRunningBinaryUpgrade (131.98s)

TestKubernetesUpgrade (408.02s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-308144 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-308144 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m24.096033487s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-308144
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-308144: (2.026716488s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-308144 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-308144 status --format={{.Host}}: exit status 7 (117.88994ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-308144 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0214 22:02:45.968256  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-308144 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.473851235s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-308144 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-308144 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-308144 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (122.910849ms)

-- stdout --
	* [kubernetes-upgrade-308144] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-308144
	    minikube start -p kubernetes-upgrade-308144 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3081442 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-308144 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-308144 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-308144 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.223735258s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-308144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-308144
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-308144: (2.84249233s)
--- PASS: TestKubernetesUpgrade (408.02s)
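
The exit-106 block above is the downgrade guard: an existing v1.32.1 cluster refuses --kubernetes-version=v1.20.0 and suggests recreating instead. A toy sketch of that version comparison (parseVer is a deliberately naive "vMAJOR.MINOR.PATCH" parser for illustration; minikube itself uses a proper semver library):

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// parseVer splits "vMAJOR.MINOR.PATCH" into three ints. Toy parser:
	// no pre-release or build metadata handling.
	func parseVer(v string) [3]int {
		var out [3]int
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		for i := 0; i < len(parts) && i < 3; i++ {
			out[i], _ = strconv.Atoi(parts[i])
		}
		return out
	}
	
	// isDowngrade reports whether requested is older than current.
	func isDowngrade(current, requested string) bool {
		c, r := parseVer(current), parseVer(requested)
		for i := 0; i < 3; i++ {
			if r[i] != c[i] {
				return r[i] < c[i]
			}
		}
		return false
	}
	
	func main() {
		fmt.Println(isDowngrade("v1.32.1", "v1.20.0")) // true  -> K8S_DOWNGRADE_UNSUPPORTED, exit 106
		fmt.Println(isDowngrade("v1.20.0", "v1.32.1")) // false -> upgrade allowed
	}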

TestMissingContainerUpgrade (110.12s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3509923057 start -p missing-upgrade-330949 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3509923057 start -p missing-upgrade-330949 --memory=2200 --driver=docker  --container-runtime=crio: (38.253112869s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-330949
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-330949: (10.427270829s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-330949
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-330949 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0214 22:00:27.975686  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-330949 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.558350303s)
helpers_test.go:175: Cleaning up "missing-upgrade-330949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-330949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-330949: (2.08986668s)
--- PASS: TestMissingContainerUpgrade (110.12s)

TestPause/serial/Start (63.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-289178 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-289178 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m3.861100929s)
--- PASS: TestPause/serial/Start (63.86s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676031 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-676031 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (116.401875ms)

-- stdout --
	* [NoKubernetes-676031] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
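
The exit-14 result above is pure flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the same rule with the standard flag package (flag names mirror the CLI, and exit code 14 matches what the test expects for usage errors; this is not minikube's actual flag-handling code):

	package main
	
	import (
		"flag"
		"fmt"
		"os"
	)
	
	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()
		// Reject the contradictory combination, as the MK_USAGE error above does.
		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags ok")
	}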

TestNoKubernetes/serial/StartWithK8s (40.33s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676031 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-676031 --driver=docker  --container-runtime=crio: (39.898780156s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-676031 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.33s)

TestNoKubernetes/serial/StartWithStopK8s (18.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676031 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-676031 --no-kubernetes --driver=docker  --container-runtime=crio: (16.548218704s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-676031 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-676031 status -o json: exit status 2 (322.899266ms)

-- stdout --
	{"Name":"NoKubernetes-676031","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-676031
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-676031: (1.969886012s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.84s)

TestNoKubernetes/serial/Start (8.92s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676031 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-676031 --no-kubernetes --driver=docker  --container-runtime=crio: (8.923210998s)
--- PASS: TestNoKubernetes/serial/Start (8.92s)

TestPause/serial/SecondStartNoReconfiguration (24.55s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-289178 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-289178 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.541665724s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.55s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-676031 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-676031 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.359632ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
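
The exit status 1 above is the point of the check: `systemctl is-active --quiet` exits 0 only when the unit is active and non-zero otherwise (here 3, the LSB code for a stopped service), and ssh propagates that back as "Process exited with status 3". A sketch of reading that exit code from Go (kubeletActive is a hypothetical helper run on the node itself):

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	// kubeletActive runs `systemctl is-active --quiet kubelet` and maps its
	// exit code: 0 means active, any other exit code means not running.
	func kubeletActive() (bool, error) {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, nil // unit inactive, failed, or not loaded
		}
		return false, err // systemctl itself could not be run
	}
	
	func main() {
		active, err := kubeletActive()
		fmt.Println("kubelet active:", active, "err:", err)
	}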

TestNoKubernetes/serial/ProfileList (0.97s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.97s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-676031
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-676031: (1.228293319s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (8.02s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-676031 --driver=docker  --container-runtime=crio
E0214 21:55:27.976072  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-676031 --driver=docker  --container-runtime=crio: (8.017817917s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.02s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-676031 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-676031 "sudo systemctl is-active --quiet service kubelet": exit status 1 (433.50475ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

TestNetworkPlugins/group/false (4.07s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-840948 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-840948 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (204.760417ms)

-- stdout --
	* [false-840948] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0214 21:55:40.168046  428812 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:55:40.168196  428812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:55:40.168207  428812 out.go:358] Setting ErrFile to fd 2...
	I0214 21:55:40.168212  428812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:55:40.168447  428812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-272800/.minikube/bin
	I0214 21:55:40.168917  428812 out.go:352] Setting JSON to false
	I0214 21:55:40.169947  428812 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9487,"bootTime":1739560653,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 21:55:40.170031  428812 start.go:140] virtualization:  
	I0214 21:55:40.173924  428812 out.go:177] * [false-840948] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0214 21:55:40.177114  428812 notify.go:220] Checking for updates...
	I0214 21:55:40.180507  428812 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:55:40.183603  428812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:55:40.186753  428812 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-272800/kubeconfig
	I0214 21:55:40.189788  428812 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-272800/.minikube
	I0214 21:55:40.192843  428812 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 21:55:40.195809  428812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:55:40.199437  428812 config.go:182] Loaded profile config "pause-289178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:55:40.199540  428812 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:55:40.238157  428812 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0214 21:55:40.238271  428812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 21:55:40.299432  428812 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-14 21:55:40.289278877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0214 21:55:40.299547  428812 docker.go:318] overlay module found
	I0214 21:55:40.302633  428812 out.go:177] * Using the docker driver based on user configuration
	I0214 21:55:40.305733  428812 start.go:304] selected driver: docker
	I0214 21:55:40.305759  428812 start.go:908] validating driver "docker" against <nil>
	I0214 21:55:40.305774  428812 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:55:40.309382  428812 out.go:201] 
	W0214 21:55:40.312389  428812 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0214 21:55:40.315213  428812 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-840948 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-840948

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-840948

>>> host: /etc/nsswitch.conf:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /etc/hosts:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /etc/resolv.conf:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-840948

>>> host: crictl pods:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: crictl containers:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> k8s: describe netcat deployment:
error: context "false-840948" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-840948" does not exist

>>> k8s: netcat logs:
error: context "false-840948" does not exist

>>> k8s: describe coredns deployment:
error: context "false-840948" does not exist

>>> k8s: describe coredns pods:
error: context "false-840948" does not exist

>>> k8s: coredns logs:
error: context "false-840948" does not exist

>>> k8s: describe api server pod(s):
error: context "false-840948" does not exist

>>> k8s: api server logs:
error: context "false-840948" does not exist

>>> host: /etc/cni:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: ip a s:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: ip r s:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: iptables-save:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: iptables table nat:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> k8s: describe kube-proxy daemon set:
error: context "false-840948" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-840948" does not exist

>>> k8s: kube-proxy logs:
error: context "false-840948" does not exist

>>> host: kubelet daemon status:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: kubelet daemon config:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> k8s: kubelet logs:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:55:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-289178
contexts:
- context:
    cluster: pause-289178
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:55:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-289178
  name: pause-289178
current-context: pause-289178
kind: Config
preferences: {}
users:
- name: pause-289178
  user:
    client-certificate: /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/pause-289178/client.crt
    client-key: /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/pause-289178/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-840948

>>> host: docker daemon status:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: docker daemon config:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /etc/docker/daemon.json:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: docker system info:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: cri-docker daemon status:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: cri-docker daemon config:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: cri-dockerd version:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: containerd daemon status:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: containerd daemon config:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /etc/containerd/config.toml:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: containerd config dump:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: crio daemon status:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: crio daemon config:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: /etc/crio:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

>>> host: crio config:
* Profile "false-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840948"

----------------------- debugLogs end: false-840948 [took: 3.645323578s] --------------------------------
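Note: every false-840948 query above fails with "context was not found" or "profile not found" because the profile never started -- the MK_USAGE exit earlier in this section means no kubeconfig context was ever written, which is why the "k8s: kubectl config" dump lists only pause-289178. A quick manual confirmation (a sketch, assuming kubectl and the same KUBECONFIG as the test run):

	kubectl config get-contexts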
helpers_test.go:175: Cleaning up "false-840948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-840948
--- PASS: TestNetworkPlugins/group/false (4.07s)
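Note: the rejected start is the asserted behaviour here, not a regression: minikube refuses a start with CNI disabled when the container runtime is crio, because CRI-O ships no built-in pod network, so the "false" network-plugin group passes exactly when the usage error fires. A start that would satisfy the same validation pins an explicit CNI instead (profile name illustrative):

	out/minikube-linux-arm64 start -p crio-cni-demo --driver=docker --container-runtime=crio --cni=bridge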

                                                
                                    
TestPause/serial/Pause (0.94s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-289178 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

                                                
                                    
TestPause/serial/VerifyStatus (0.45s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-289178 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-289178 --output=json --layout=cluster: exit status 2 (445.840561ms)
-- stdout --
	{"Name":"pause-289178","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-289178","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)
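Note: the status codes in the JSON payload borrow HTTP numbering -- 200 OK, 405 Stopped, 418 Paused -- so a paused apiserver plus a stopped kubelet is the expected shape right after the Pause step. Pulling the per-component states out of the payload, as a sketch (assumes jq is installed on the host):

	out/minikube-linux-arm64 status -p pause-289178 --output=json --layout=cluster | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'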

                                                
                                    
TestPause/serial/Unpause (0.85s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-289178 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.85s)

                                                
                                    
TestPause/serial/PauseAgain (1.11s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-289178 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-289178 --alsologtostderr -v=5: (1.114344577s)
--- PASS: TestPause/serial/PauseAgain (1.11s)

                                                
                                    
TestPause/serial/DeletePaused (3.04s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-289178 --alsologtostderr -v=5
E0214 21:55:49.036910  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-289178 --alsologtostderr -v=5: (3.038313336s)
--- PASS: TestPause/serial/DeletePaused (3.04s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.17s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-289178
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-289178: exit status 1 (18.056805ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-289178: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)
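Note: the non-zero exit from "docker volume inspect" is the point of this check -- an empty [] plus "no such volume" proves the profile's volume went away with the delete. The same assertion as a one-liner (sketch):

	docker volume inspect pause-289178 >/dev/null 2>&1 && echo "volume still present" || echo "volume deleted"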

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.79s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (122.05s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.339139235 start -p stopped-upgrade-779232 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0214 21:57:45.971688  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.339139235 start -p stopped-upgrade-779232 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m22.527426236s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.339139235 -p stopped-upgrade-779232 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.339139235 -p stopped-upgrade-779232 stop: (2.530137922s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-779232 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-779232 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.977221753s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.05s)
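Note: this test drives the upgrade with two binaries -- the archived v1.26.0 release creates and then stops the cluster, after which the freshly built binary restarts it in place. Condensed from the invocations above (verbose logging flags trimmed; the old binary still uses the legacy --vm-driver spelling):

	/tmp/minikube-v1.26.0.339139235 start -p stopped-upgrade-779232 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.26.0.339139235 -p stopped-upgrade-779232 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-779232 --memory=2200 --driver=docker --container-runtime=crio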

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-779232
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-779232: (1.159816406s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (62.6s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0214 22:03:31.040854  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m2.595108116s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.60s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-840948 "pgrep -a kubelet"
I0214 22:04:31.593275  278186 config.go:182] Loaded profile config "auto-840948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-840948 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-g57x8" [4b704ffc-444e-469d-9385-793cd50fba92] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-g57x8" [4b704ffc-444e-469d-9385-793cd50fba92] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003406056s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-840948 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
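Note: the DNS/Localhost/HairPin trio repeats for every CNI group below. DNS resolves the in-cluster service name, Localhost checks the pod can reach its own port, and HairPin checks the pod can reach itself back through its own "netcat" Service -- the path most likely to break under a misconfigured CNI. Replaying the hairpin probe by hand against a live profile (sketch):

	kubectl --context auto-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"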

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (55.61s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0214 22:05:27.975924  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (55.610493224s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.61s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7bm96" [e4256f0a-e555-4233-b0a0-2f3160aeaabf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003579873s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
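Note: ControllerPod only asserts that the CNI's own agent pod reaches Running; the selector and namespace differ per plugin (app=kindnet in kube-system here, k8s-app=calico-node in kube-system and app=flannel in kube-flannel below). Checking by hand (sketch):

	kubectl --context kindnet-840948 -n kube-system get pods -l app=kindnet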

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-840948 "pgrep -a kubelet"
I0214 22:06:07.212702  278186 config.go:182] Loaded profile config "kindnet-840948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.36s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-840948 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nxzpw" [8d8c1f68-88c6-4da1-87fc-198dd72df5d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-nxzpw" [8d8c1f68-88c6-4da1-87fc-198dd72df5d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003696411s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-840948 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (66.67s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m6.668901075s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.67s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.71s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0214 22:07:45.968195  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.705461042s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.71s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v6db6" [19d22a7a-5bef-49ea-9e3b-7b867973e760] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-v6db6" [19d22a7a-5bef-49ea-9e3b-7b867973e760] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004515275s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-840948 "pgrep -a kubelet"
I0214 22:07:56.273360  278186 config.go:182] Loaded profile config "calico-840948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.39s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-840948 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mbx4t" [0b17cca9-251a-4a73-a3f4-28ad3b46989e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-mbx4t" [0b17cca9-251a-4a73-a3f4-28ad3b46989e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004806454s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.39s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-840948 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.31s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.32s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (75.75s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.74778473s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.75s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-840948 "pgrep -a kubelet"
I0214 22:08:34.758793  278186 config.go:182] Loaded profile config "custom-flannel-840948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-840948 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l9rjm" [e96395a5-6c06-403b-abb8-64f3aa10e8a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-l9rjm" [e96395a5-6c06-403b-abb8-64f3aa10e8a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005594147s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-840948 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (45.37s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0214 22:09:31.908758  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:31.915235  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:31.926540  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:31.947938  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:31.990180  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:32.071575  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:32.233824  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:32.555424  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:33.197231  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:34.479092  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:37.041378  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:09:42.162680  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (45.365445673s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.37s)
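Note: the burst of cert_rotation errors above is leftover noise rather than a flannel failure -- the kubeconfig apparently still references the client certificate of the since-deleted auto-840948 profile, and client-go's certificate-rotation watcher keeps retrying it with a doubling backoff (visible in the timestamp spacing). Listing which profile directories actually remain would confirm it (path taken from the log):

	ls /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/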

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-840948 "pgrep -a kubelet"
I0214 22:09:47.990689  278186 config.go:182] Loaded profile config "enable-default-cni-840948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-840948 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-j4vhp" [b451c969-8c13-4fed-858e-5bec20c875b9] Pending
E0214 22:09:52.404876  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-j4vhp" [b451c969-8c13-4fed-858e-5bec20c875b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.008667157s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wgzzw" [8547cb27-9dfb-4591-82a8-3ec7286fe3eb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003012519s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-840948 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-840948 "pgrep -a kubelet"
I0214 22:10:04.400766  278186 config.go:182] Loaded profile config "flannel-840948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-840948 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dk6x5" [d8475904-0f80-490e-8f03-d61f72752272] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dk6x5" [d8475904-0f80-490e-8f03-d61f72752272] Running
E0214 22:10:12.886587  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003659023s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-840948 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (79.32s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0214 22:10:27.976164  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-840948 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m19.31927919s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.32s)
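
For reference, the same start invocation with the network-relevant flags annotated (all flags exactly as logged above):

    # --cni=bridge selects the built-in bridge CNI plugin rather than
    # auto-detection; --wait=true blocks until core components report
    # Ready, bounded by --wait-timeout.
    out/minikube-linux-arm64 start -p bridge-840948 --memory=3072 \
        --alsologtostderr --wait=true --wait-timeout=15m \
        --cni=bridge --driver=docker --container-runtime=crio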

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (150.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-553294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0214 22:10:53.849790  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:00.920947  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:00.927312  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:00.939321  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:00.961575  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:01.003303  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:01.085517  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:01.247174  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:01.568542  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:02.210583  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:03.492787  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:06.054482  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:11.176139  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:21.418462  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-553294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m30.221080423s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (150.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-840948 "pgrep -a kubelet"
E0214 22:11:41.900577  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
I0214 22:11:42.172635  278186 config.go:182] Loaded profile config "bridge-840948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-840948 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8vg5p" [f8b5a42a-f61b-48e8-8101-fd7442364e3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8vg5p" [f8b5a42a-f61b-48e8-8101-fd7442364e3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004173764s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-840948 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-840948 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (63.59s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-135482 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0214 22:12:15.771891  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:22.862327  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:29.038947  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:45.968679  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:49.847435  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:49.853765  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:49.865076  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:49.886757  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:49.928080  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:50.009435  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:50.170718  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:50.492224  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:51.134228  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:52.416467  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:54.978169  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:00.099513  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:10.340775  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-135482 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (1m3.588792294s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.59s)
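
The distinguishing flag in this group is --preload=false, which skips minikube's preloaded image tarball so that CRI-O pulls every component image individually. Annotated form of the command above:

    # Slower than a preloaded start, but exercises the full image pull path.
    out/minikube-linux-arm64 start -p no-preload-135482 --memory=2200 \
        --alsologtostderr --wait=true --preload=false \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.32.1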

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-553294 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0606994a-cae6-4905-af0a-811635cd2be6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0606994a-cae6-4905-af0a-811635cd2be6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003784431s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-553294 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)
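
DeployApp boils down to three steps: create the busybox pod, wait for it to run, and exec a trivial command to prove end-to-end exec works. A condensed sketch (the harness waits via its own helper; the kubectl wait line is a stand-in used here for illustration):

    kubectl --context old-k8s-version-553294 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-553294 wait --for=condition=ready \
        pod -l integration-test=busybox --timeout=8m
    # Reads the container's open-file limit, confirming exec round-trips.
    kubectl --context old-k8s-version-553294 exec busybox -- /bin/sh -c "ulimit -n"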

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.31s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-135482 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [33133ad7-b193-4cdd-9d59-929374787aa5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [33133ad7-b193-4cdd-9d59-929374787aa5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004204625s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-135482 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-553294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-553294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012565283s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-553294 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)
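
The addon is enabled with image and registry overrides so the wiring can be verified without pulling the real metrics-server image. As run above:

    # Each --images/--registries pair rewrites one addon image; here
    # MetricsServer is pointed at an echoserver image on a fake registry.
    out/minikube-linux-arm64 addons enable metrics-server \
        -p old-k8s-version-553294 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-553294 describe deploy/metrics-server -n kube-system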

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-553294 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-553294 --alsologtostderr -v=3: (12.033439324s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-135482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-135482 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.94s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-135482 --alsologtostderr -v=3
E0214 22:13:30.822171  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-135482 --alsologtostderr -v=3: (11.935602915s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-553294 -n old-k8s-version-553294
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-553294 -n old-k8s-version-553294: exit status 7 (74.319409ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-553294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0214 22:13:35.068359  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
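
Status is read through a Go template so a single field can be consumed in scripts. For a stopped profile the host field prints "Stopped" and the command exits non-zero (status 7 here), which the test explicitly tolerates before enabling the dashboard addon:

    # Non-zero exit is expected while the cluster is stopped.
    out/minikube-linux-arm64 status --format={{.Host}} \
        -p old-k8s-version-553294 -n old-k8s-version-553294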

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (115.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-553294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0214 22:13:35.075072  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:35.086382  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:35.107742  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:35.152132  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:35.234359  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:35.395620  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:35.717732  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:36.359981  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:37.641748  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:40.203644  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-553294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (1m54.89459249s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-553294 -n old-k8s-version-553294
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (115.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-135482 -n no-preload-135482
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-135482 -n no-preload-135482: exit status 7 (87.854893ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-135482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (53.05s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-135482 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0214 22:13:44.783690  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:45.325398  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:13:55.567404  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:11.783837  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:16.049377  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:31.909561  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-135482 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (52.666930549s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-135482 -n no-preload-135482
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xwhxn" [b2756809-50ab-4d14-8318-d2b2b22931ad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003699319s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xwhxn" [b2756809-50ab-4d14-8318-d2b2b22931ad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003336971s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-135482 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-135482 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
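
Listing images as JSON makes this check scriptable. A sketch for eyeballing the tags by hand, assuming jq is installed (it is not part of the harness, and the repoTags field name is an assumption about the JSON shape):

    # Prints every tag the runtime reports; anything outside minikube's
    # expected image set is flagged as a "non-minikube image", as above.
    out/minikube-linux-arm64 -p no-preload-135482 image list --format=json \
        | jq -r '.[].repoTags[]?'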

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.21s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-135482 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-135482 -n no-preload-135482
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-135482 -n no-preload-135482: exit status 2 (320.672665ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-135482 -n no-preload-135482
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-135482 -n no-preload-135482: exit status 2 (309.804126ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-135482 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-135482 -n no-preload-135482
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-135482 -n no-preload-135482
E0214 22:14:48.277315  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:48.283605  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:48.295132  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:48.316408  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:48.357752  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.21s)
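
The pause round-trip condensed: pause, confirm through status templates (APIServer reports "Paused" and Kubelet "Stopped", each query exiting with status 2, which the test accepts), then unpause and re-check:

    out/minikube-linux-arm64 pause -p no-preload-135482 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-135482 -n no-preload-135482
    out/minikube-linux-arm64 unpause -p no-preload-135482 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-135482 -n no-preload-135482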

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (50.04s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-356160 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0214 22:14:53.408260  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:57.011231  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.095674  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.102027  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.113390  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.134743  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.176184  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.257501  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.419216  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.529860  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:58.741391  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:59.382722  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:14:59.613628  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/auto-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:15:00.664403  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:15:03.225723  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:15:08.347263  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:15:08.771607  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:15:18.589248  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:15:27.975652  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/functional-264648/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:15:29.253702  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-356160 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (50.040821435s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.04s)
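
The group's distinguishing flag is --embed-certs, which inlines client certificate data into the generated kubeconfig instead of referencing files under the .minikube directory. Annotated form of the command above:

    out/minikube-linux-arm64 start -p embed-certs-356160 --memory=2200 \
        --alsologtostderr --wait=true --embed-certs \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.32.1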

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-47xbs" [52ccf5e0-7775-4be5-9653-76e9624f316b] Running
E0214 22:15:33.706438  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003326015s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-47xbs" [52ccf5e0-7775-4be5-9653-76e9624f316b] Running
E0214 22:15:39.070706  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003757498s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-553294 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.48s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-356160 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7160f59d-631e-4434-b42f-071128cc4a49] Pending
helpers_test.go:344: "busybox" [7160f59d-631e-4434-b42f-071128cc4a49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7160f59d-631e-4434-b42f-071128cc4a49] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003550937s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-356160 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-553294 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-553294 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-553294 -n old-k8s-version-553294
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-553294 -n old-k8s-version-553294: exit status 2 (335.362521ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-553294 -n old-k8s-version-553294
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-553294 -n old-k8s-version-553294: exit status 2 (335.747192ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-553294 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-553294 -n old-k8s-version-553294
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-553294 -n old-k8s-version-553294
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.75s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-552505 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-552505 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (59.746305318s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.85s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-356160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-356160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.704612551s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-356160 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.16s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-356160 --alsologtostderr -v=3
E0214 22:16:00.921772  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-356160 --alsologtostderr -v=3: (12.159809129s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-356160 -n embed-certs-356160
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-356160 -n embed-certs-356160: exit status 7 (141.584585ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-356160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (56.73s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-356160 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0214 22:16:10.214979  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:18.932540  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/custom-flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:20.032986  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:28.626299  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/kindnet-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:42.452253  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:42.458572  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:42.469924  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:42.491422  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:42.532736  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:42.614057  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:42.775497  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:43.097177  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:43.739277  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:45.021332  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:16:47.582967  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-356160 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (56.365922384s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-356160 -n embed-certs-356160
E0214 22:17:02.946743  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-552505 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4e58242a-c073-4cb3-a409-8ffb42d15f04] Pending
helpers_test.go:344: "busybox" [4e58242a-c073-4cb3-a409-8ffb42d15f04] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4e58242a-c073-4cb3-a409-8ffb42d15f04] Running
E0214 22:16:52.705136  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004599462s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-552505 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-552505 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-552505 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068712185s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-552505 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-552505 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-552505 --alsologtostderr -v=3: (12.020582556s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-7ht7v" [772fd8e3-141c-4a50-a9b3-41aa2c9243bc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002654174s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-7ht7v" [772fd8e3-141c-4a50-a9b3-41aa2c9243bc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.012462437s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-356160 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505: exit status 7 (88.199808ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-552505 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
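
A note on the `exit status 7 (may be ok)` lines in the EnableAddonAfterStop steps: minikube's status command encodes component state as bit flags in its exit code rather than signalling a hard failure. A sketch of the scheme in Go (constant names here are our own; the right-to-left flag layout follows the description in `minikube status --help`):

// Exit-code bits reported by `minikube status`, right to left:
// host, cluster, kubernetes. Names are illustrative, not minikube's.
const (
	hostNotOK    = 1 << 0 // 1
	clusterNotOK = 1 << 1 // 2
	k8sNotOK     = 1 << 2 // 4
)

// A fully stopped profile sets all three bits: 1 + 2 + 4 = 7, hence
// "exit status 7" above. The "exit status 2" in the Pause sections of
// this report is consistent with only the cluster bit being set.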

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-552505 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-552505 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (53.585714798s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-356160 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
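
The VerifyKubernetesImages steps list the cluster's images as JSON and report anything outside the expected Kubernetes set, which is why the kindnetd and busybox tags above are called out. A rough sketch of that filtering in Go (the repoTags field name is an assumption about the JSON schema, which this log does not show; the real test compares against an explicit expected-image list):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// listedImage guesses at the relevant slice of `image list --format=json`.
type listedImage struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "embed-certs-356160",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Treat anything outside registry.k8s.io as a non-minikube image,
			// matching the "Found non-minikube image" lines above.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}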

TestStartStop/group/embed-certs/serial/Pause (3.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-356160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-356160 -n embed-certs-356160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-356160 -n embed-certs-356160: exit status 2 (376.785366ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-356160 -n embed-certs-356160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-356160 -n embed-certs-356160: exit status 2 (369.459887ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-356160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-356160 -n embed-certs-356160
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-356160 -n embed-certs-356160
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.68s)

TestStartStop/group/newest-cni/serial/FirstStart (39.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-096791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0214 22:17:23.428074  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:17:32.136764  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/enable-default-cni-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:17:41.955242  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/flannel-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:17:45.967812  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/addons-794492/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:17:49.847359  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-096791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (39.286542788s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-096791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-096791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.484167825s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/newest-cni/serial/Stop (1.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-096791 --alsologtostderr -v=3
E0214 22:18:04.389938  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/bridge-840948/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-096791 --alsologtostderr -v=3: (1.778627011s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.78s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-096791 -n newest-cni-096791
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-096791 -n newest-cni-096791: exit status 7 (81.349112ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-096791 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (17.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-096791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-096791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (16.745159632s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-096791 -n newest-cni-096791
E0214 22:18:22.471609  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.43s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pvdtz" [f325517f-b7f4-457a-a437-a2d69e869be7] Running
E0214 22:18:11.473405  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:11.479767  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:11.491173  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:11.512546  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:11.553929  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:11.635636  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:11.797629  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004360667s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pvdtz" [f325517f-b7f4-457a-a437-a2d69e869be7] Running
E0214 22:18:12.119550  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:12.761530  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:14.043660  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:16.605115  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:17.338644  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:17.344952  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:17.356335  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:17.377641  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:17.418941  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:17.500267  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:17.548613  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/calico-840948/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:18:17.662027  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004481453s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-552505 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0214 22:18:17.983390  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.19s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-552505 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-552505 --alsologtostderr -v=1
E0214 22:18:18.625598  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-552505 --alsologtostderr -v=1: (1.290385917s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505
E0214 22:18:19.909440  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/no-preload-135482/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505: exit status 2 (457.93032ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505: exit status 2 (504.890078ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-552505 --alsologtostderr -v=1
E0214 22:18:21.727154  278186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/old-k8s-version-553294/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-552505 --alsologtostderr -v=1: (1.150108319s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-552505 -n default-k8s-diff-port-552505
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.90s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-096791 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/newest-cni/serial/Pause (3.86s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-096791 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-096791 --alsologtostderr -v=1: (1.338140835s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-096791 -n newest-cni-096791
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-096791 -n newest-cni-096791: exit status 2 (319.399346ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-096791 -n newest-cni-096791
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-096791 -n newest-cni-096791: exit status 2 (326.315276ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-096791 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-096791 -n newest-cni-096791
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-096791 -n newest-cni-096791
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.86s)

Test skip (32/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-233506 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-233506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-233506
--- SKIP: TestDownloadOnlyKic (0.59s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.38s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-794492 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.38s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.88s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-840948 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-840948

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-840948

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: /etc/hosts:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: /etc/resolv.conf:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-840948

>>> host: crictl pods:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: crictl containers:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> k8s: describe netcat deployment:
error: context "kubenet-840948" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-840948" does not exist

>>> k8s: netcat logs:
error: context "kubenet-840948" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-840948" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-840948" does not exist

>>> k8s: coredns logs:
error: context "kubenet-840948" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-840948" does not exist

>>> k8s: api server logs:
error: context "kubenet-840948" does not exist

>>> host: /etc/cni:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: ip a s:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: ip r s:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: iptables-save:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: iptables table nat:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-840948" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-840948" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-840948" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: kubelet daemon config:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> k8s: kubelet logs:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt
extensions:
- extension:
last-update: Fri, 14 Feb 2025 21:55:29 UTC
provider: minikube.sigs.k8s.io
version: v1.35.0
name: cluster_info
server: https://192.168.76.2:8443
name: pause-289178
contexts:
- context:
cluster: pause-289178
extensions:
- extension:
last-update: Fri, 14 Feb 2025 21:55:29 UTC
provider: minikube.sigs.k8s.io
version: v1.35.0
name: context_info
namespace: default
user: pause-289178
name: pause-289178
current-context: pause-289178
kind: Config
preferences: {}
users:
- name: pause-289178
user:
client-certificate: /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/pause-289178/client.crt
client-key: /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/pause-289178/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-840948

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840948"

                                                
                                                
----------------------- debugLogs end: kubenet-840948 [took: 3.708687443s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-840948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-840948
--- SKIP: TestNetworkPlugins/group/kubenet (3.88s)
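Note on the repeated failures above: debugLogs runs its full probe matrix even though the kubenet-840948 profile was never created, so every host probe prints minikube's "Profile not found" hint and every k8s probe fails kubectl's context lookup. A minimal Go sketch of one such host probe, assuming a hypothetical probeHost helper that shells out through minikube ssh (the real collector may be implemented differently):

package main

import (
	"fmt"
	"os/exec"
)

// probeHost runs one host-side debug command through `minikube ssh` and
// prints it in the ">>> host: <name>:" format used in this report.
// probeHost is a hypothetical helper, not part of the minikube test suite.
func probeHost(profile, name string, args ...string) {
	fmt.Printf(">>> host: %s:\n", name)
	cmd := exec.Command("minikube", append([]string{"-p", profile, "ssh", "--"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// With no such profile, minikube prints the "Profile ... not found"
		// hint seen above and exits non-zero.
		fmt.Printf("(probe failed: %v)\n", err)
	}
}

func main() {
	probeHost("kubenet-840948", "ip r s", "ip", "r", "s")
}
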
x
+
TestNetworkPlugins/group/cilium (5.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-840948 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-840948

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-840948

>>> host: /etc/nsswitch.conf:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /etc/hosts:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /etc/resolv.conf:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-840948

>>> host: crictl pods:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: crictl containers:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> k8s: describe netcat deployment:
error: context "cilium-840948" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-840948" does not exist

>>> k8s: netcat logs:
error: context "cilium-840948" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-840948" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-840948" does not exist

>>> k8s: coredns logs:
error: context "cilium-840948" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-840948" does not exist

>>> k8s: api server logs:
error: context "cilium-840948" does not exist

>>> host: /etc/cni:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: ip a s:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: ip r s:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: iptables-save:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: iptables table nat:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-840948

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-840948

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-840948" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-840948" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-840948

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-840948

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-840948" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-840948" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-840948" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-840948" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-840948" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: kubelet daemon config:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> k8s: kubelet logs:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20315-272800/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:55:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-289178
contexts:
- context:
    cluster: pause-289178
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:55:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-289178
  name: pause-289178
current-context: pause-289178
kind: Config
preferences: {}
users:
- name: pause-289178
  user:
    client-certificate: /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/pause-289178/client.crt
    client-key: /home/jenkins/minikube-integration/20315-272800/.minikube/profiles/pause-289178/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-840948

>>> host: docker daemon status:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: docker daemon config:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: docker system info:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: cri-docker daemon status:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: cri-docker daemon config:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: cri-dockerd version:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: containerd daemon status:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: containerd daemon config:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: containerd config dump:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: crio daemon status:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: crio daemon config:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: /etc/crio:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

>>> host: crio config:
* Profile "cilium-840948" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840948"

----------------------- debugLogs end: cilium-840948 [took: 5.268843389s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-840948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-840948
--- SKIP: TestNetworkPlugins/group/cilium (5.79s)
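The "context was not found for specified context" lines above are kubectl rejecting --context cilium-840948 before it ever contacts a server: the kubeconfig shown in the "k8s: kubectl config" dump only defines pause-289178. A small client-go sketch of that lookup (the kubeconfig path below is an assumption, not the path this job used):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; substitute the kubeconfig under test.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	// kubectl fails with "context was not found for specified context"
	// when the requested name is missing from this map.
	if _, ok := cfg.Contexts["cilium-840948"]; !ok {
		fmt.Println(`context "cilium-840948" does not exist`)
	}
}
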
x
+
TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-457119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-457119
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
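For reference, a skip gated on the active driver like the one above is ordinary testing.T plumbing. A minimal sketch, assuming the driver name is exposed through an environment variable (the real suite resolves it through its own helpers):

package mytest

import (
	"os"
	"testing"
)

func TestDisableDriverMounts(t *testing.T) {
	// Assumption: driver name via env var; minikube's suite has its own lookup.
	if os.Getenv("MINIKUBE_DRIVER") != "virtualbox" {
		// t.Skip ends the test immediately and records it as SKIP,
		// which is what produces the "--- SKIP" line above.
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
}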